Oct 31 00:38:45.022343 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025
Oct 31 00:38:45.022372 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:38:45.022388 kernel: BIOS-provided physical RAM map:
Oct 31 00:38:45.022397 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 31 00:38:45.022405 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 31 00:38:45.022414 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 31 00:38:45.022424 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 31 00:38:45.022433 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 31 00:38:45.022441 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Oct 31 00:38:45.022450 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Oct 31 00:38:45.022463 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Oct 31 00:38:45.022472 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Oct 31 00:38:45.022485 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Oct 31 00:38:45.022494 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Oct 31 00:38:45.022508 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Oct 31 00:38:45.022518 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 31 00:38:45.022531 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Oct 31 00:38:45.022541 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Oct 31 00:38:45.022550 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 31 00:38:45.022560 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 31 00:38:45.022569 kernel: NX (Execute Disable) protection: active
Oct 31 00:38:45.022579 kernel: APIC: Static calls initialized
Oct 31 00:38:45.022588 kernel: efi: EFI v2.7 by EDK II
Oct 31 00:38:45.022598 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Oct 31 00:38:45.022625 kernel: SMBIOS 2.8 present.
Oct 31 00:38:45.022635 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Oct 31 00:38:45.022644 kernel: Hypervisor detected: KVM
Oct 31 00:38:45.022658 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 31 00:38:45.022668 kernel: kvm-clock: using sched offset of 5878803849 cycles
Oct 31 00:38:45.022678 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 31 00:38:45.022688 kernel: tsc: Detected 2794.748 MHz processor
Oct 31 00:38:45.022699 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 00:38:45.022709 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 00:38:45.022719 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Oct 31 00:38:45.022729 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 31 00:38:45.022739 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 00:38:45.022752 kernel: Using GB pages for direct mapping
Oct 31 00:38:45.022762 kernel: Secure boot disabled
Oct 31 00:38:45.022772 kernel: ACPI: Early table checksum verification disabled
Oct 31 00:38:45.022782 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 31 00:38:45.022798 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 31 00:38:45.022808 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:38:45.022818 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:38:45.022832 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 31 00:38:45.022843 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:38:45.022858 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:38:45.022868 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:38:45.022879 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:38:45.022889 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 31 00:38:45.022899 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 31 00:38:45.022913 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 31 00:38:45.022924 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 31 00:38:45.022934 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 31 00:38:45.022944 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 31 00:38:45.022954 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 31 00:38:45.022965 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 31 00:38:45.022975 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 31 00:38:45.022986 kernel: No NUMA configuration found
Oct 31 00:38:45.022999 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Oct 31 00:38:45.023014 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Oct 31 00:38:45.023024 kernel: Zone ranges:
Oct 31 00:38:45.023033 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 00:38:45.023043 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Oct 31 00:38:45.023053 kernel: Normal empty
Oct 31 00:38:45.023063 kernel: Movable zone start for each node
Oct 31 00:38:45.023072 kernel: Early memory node ranges
Oct 31 00:38:45.023082 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 31 00:38:45.023092 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 31 00:38:45.023102 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 31 00:38:45.023116 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Oct 31 00:38:45.023135 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Oct 31 00:38:45.023145 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Oct 31 00:38:45.023158 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Oct 31 00:38:45.023167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 00:38:45.023177 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 31 00:38:45.023188 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 31 00:38:45.023197 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 00:38:45.023207 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Oct 31 00:38:45.023221 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Oct 31 00:38:45.023231 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Oct 31 00:38:45.023241 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 31 00:38:45.023250 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 31 00:38:45.023260 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 31 00:38:45.023270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 31 00:38:45.023280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 31 00:38:45.023290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 31 00:38:45.023300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 31 00:38:45.023313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 31 00:38:45.023323 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 00:38:45.023333 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 31 00:38:45.023343 kernel: TSC deadline timer available
Oct 31 00:38:45.023353 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 31 00:38:45.023363 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 31 00:38:45.023373 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 31 00:38:45.023382 kernel: kvm-guest: setup PV sched yield
Oct 31 00:38:45.023392 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 31 00:38:45.023405 kernel: Booting paravirtualized kernel on KVM
Oct 31 00:38:45.023415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 00:38:45.023425 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 31 00:38:45.023435 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Oct 31 00:38:45.023445 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Oct 31 00:38:45.023455 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 31 00:38:45.023465 kernel: kvm-guest: PV spinlocks enabled
Oct 31 00:38:45.023475 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 31 00:38:45.023486 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:38:45.023503 kernel: random: crng init done
Oct 31 00:38:45.023513 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 31 00:38:45.023523 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 00:38:45.023533 kernel: Fallback order for Node 0: 0
Oct 31 00:38:45.023543 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Oct 31 00:38:45.023553 kernel: Policy zone: DMA32
Oct 31 00:38:45.023563 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 00:38:45.023573 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 166140K reserved, 0K cma-reserved)
Oct 31 00:38:45.023587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 31 00:38:45.023597 kernel: ftrace: allocating 37980 entries in 149 pages
Oct 31 00:38:45.023621 kernel: ftrace: allocated 149 pages with 4 groups
Oct 31 00:38:45.023631 kernel: Dynamic Preempt: voluntary
Oct 31 00:38:45.023641 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 00:38:45.023667 kernel: rcu: RCU event tracing is enabled.
Oct 31 00:38:45.023681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 31 00:38:45.023692 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 00:38:45.023702 kernel: Rude variant of Tasks RCU enabled.
Oct 31 00:38:45.023712 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 00:38:45.023719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 00:38:45.023727 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 31 00:38:45.023734 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 31 00:38:45.023745 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 31 00:38:45.023752 kernel: Console: colour dummy device 80x25
Oct 31 00:38:45.023760 kernel: printk: console [ttyS0] enabled
Oct 31 00:38:45.023770 kernel: ACPI: Core revision 20230628
Oct 31 00:38:45.023778 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 31 00:38:45.023789 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 00:38:45.023796 kernel: x2apic enabled
Oct 31 00:38:45.023804 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 31 00:38:45.023811 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 31 00:38:45.023819 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 31 00:38:45.023826 kernel: kvm-guest: setup PV IPIs
Oct 31 00:38:45.023834 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 31 00:38:45.023841 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 31 00:38:45.023849 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 31 00:38:45.023858 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 31 00:38:45.023866 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 31 00:38:45.023873 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 31 00:38:45.023881 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 00:38:45.023888 kernel: Spectre V2 : Mitigation: Retpolines
Oct 31 00:38:45.023896 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 31 00:38:45.023903 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 31 00:38:45.023911 kernel: active return thunk: retbleed_return_thunk
Oct 31 00:38:45.023921 kernel: RETBleed: Mitigation: untrained return thunk
Oct 31 00:38:45.023928 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 00:38:45.023936 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 31 00:38:45.023944 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 31 00:38:45.023954 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 31 00:38:45.023964 kernel: active return thunk: srso_return_thunk
Oct 31 00:38:45.023975 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 31 00:38:45.023988 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 00:38:45.024001 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 00:38:45.024013 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 00:38:45.024021 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 00:38:45.024029 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 31 00:38:45.024036 kernel: Freeing SMP alternatives memory: 32K
Oct 31 00:38:45.024043 kernel: pid_max: default: 32768 minimum: 301
Oct 31 00:38:45.024063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 31 00:38:45.024075 kernel: landlock: Up and running.
Oct 31 00:38:45.025977 kernel: SELinux: Initializing.
Oct 31 00:38:45.025994 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:38:45.026012 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:38:45.026022 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 31 00:38:45.026032 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 00:38:45.026042 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 00:38:45.026053 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 00:38:45.026064 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 31 00:38:45.026075 kernel: ... version: 0
Oct 31 00:38:45.026085 kernel: ... bit width: 48
Oct 31 00:38:45.026100 kernel: ... generic registers: 6
Oct 31 00:38:45.026112 kernel: ... value mask: 0000ffffffffffff
Oct 31 00:38:45.026133 kernel: ... max period: 00007fffffffffff
Oct 31 00:38:45.026144 kernel: ... fixed-purpose events: 0
Oct 31 00:38:45.026155 kernel: ... event mask: 000000000000003f
Oct 31 00:38:45.026165 kernel: signal: max sigframe size: 1776
Oct 31 00:38:45.026176 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 00:38:45.026187 kernel: rcu: Max phase no-delay instances is 400.
Oct 31 00:38:45.026198 kernel: smp: Bringing up secondary CPUs ...
Oct 31 00:38:45.026208 kernel: smpboot: x86: Booting SMP configuration:
Oct 31 00:38:45.026223 kernel: .... node #0, CPUs: #1 #2 #3
Oct 31 00:38:45.026234 kernel: smp: Brought up 1 node, 4 CPUs
Oct 31 00:38:45.026245 kernel: smpboot: Max logical packages: 1
Oct 31 00:38:45.026259 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 31 00:38:45.026284 kernel: devtmpfs: initialized
Oct 31 00:38:45.026300 kernel: x86/mm: Memory block size: 128MB
Oct 31 00:38:45.026315 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 31 00:38:45.026334 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 31 00:38:45.026352 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Oct 31 00:38:45.026378 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 31 00:38:45.026396 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 31 00:38:45.026414 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 00:38:45.026432 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 31 00:38:45.026450 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 00:38:45.026472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 00:38:45.026491 kernel: audit: initializing netlink subsys (disabled)
Oct 31 00:38:45.026510 kernel: audit: type=2000 audit(1761871123.091:1): state=initialized audit_enabled=0 res=1
Oct 31 00:38:45.026527 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 00:38:45.026553 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 31 00:38:45.026571 kernel: cpuidle: using governor menu
Oct 31 00:38:45.026590 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 00:38:45.026632 kernel: dca service started, version 1.12.1
Oct 31 00:38:45.026652 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 31 00:38:45.026670 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 31 00:38:45.026688 kernel: PCI: Using configuration type 1 for base access
Oct 31 00:38:45.026706 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 31 00:38:45.026731 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 00:38:45.026749 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 31 00:38:45.026767 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 00:38:45.026786 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 31 00:38:45.026804 kernel: ACPI: Added _OSI(Module Device)
Oct 31 00:38:45.026822 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 00:38:45.026839 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 00:38:45.026857 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 00:38:45.026876 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 31 00:38:45.026893 kernel: ACPI: Interpreter enabled
Oct 31 00:38:45.026917 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 31 00:38:45.026935 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 31 00:38:45.026953 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 31 00:38:45.026971 kernel: PCI: Using E820 reservations for host bridge windows
Oct 31 00:38:45.026989 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 31 00:38:45.027007 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 00:38:45.027389 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 00:38:45.027673 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 31 00:38:45.027920 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 31 00:38:45.027938 kernel: PCI host bridge to bus 0000:00
Oct 31 00:38:45.028227 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 31 00:38:45.028455 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 31 00:38:45.028678 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 31 00:38:45.030895 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 31 00:38:45.031090 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 31 00:38:45.031266 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Oct 31 00:38:45.031418 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 00:38:45.031655 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 31 00:38:45.031844 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 31 00:38:45.032043 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Oct 31 00:38:45.032306 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Oct 31 00:38:45.032558 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Oct 31 00:38:45.032825 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Oct 31 00:38:45.033081 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 31 00:38:45.033373 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 31 00:38:45.033644 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Oct 31 00:38:45.033895 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Oct 31 00:38:45.036166 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Oct 31 00:38:45.036384 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 31 00:38:45.036564 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Oct 31 00:38:45.036763 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Oct 31 00:38:45.036952 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Oct 31 00:38:45.037179 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 31 00:38:45.037358 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Oct 31 00:38:45.037541 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Oct 31 00:38:45.037738 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Oct 31 00:38:45.037914 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Oct 31 00:38:45.038150 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 31 00:38:45.038330 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 31 00:38:45.038538 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 31 00:38:45.038738 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Oct 31 00:38:45.038918 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Oct 31 00:38:45.039104 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 31 00:38:45.039287 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Oct 31 00:38:45.039303 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 31 00:38:45.039314 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 31 00:38:45.039325 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 31 00:38:45.039335 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 31 00:38:45.039346 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 31 00:38:45.039362 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 31 00:38:45.039372 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 31 00:38:45.039383 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 31 00:38:45.039393 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 31 00:38:45.039404 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 31 00:38:45.039414 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 31 00:38:45.039425 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 31 00:38:45.039435 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 31 00:38:45.039445 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 31 00:38:45.039459 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 31 00:38:45.039470 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 31 00:38:45.039480 kernel: iommu: Default domain type: Translated
Oct 31 00:38:45.039490 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 31 00:38:45.039500 kernel: efivars: Registered efivars operations
Oct 31 00:38:45.039510 kernel: PCI: Using ACPI for IRQ routing
Oct 31 00:38:45.041587 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 31 00:38:45.041601 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 31 00:38:45.041626 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Oct 31 00:38:45.041642 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Oct 31 00:38:45.041653 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Oct 31 00:38:45.041835 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 31 00:38:45.042011 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 31 00:38:45.042198 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 31 00:38:45.042215 kernel: vgaarb: loaded
Oct 31 00:38:45.042226 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 31 00:38:45.042238 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 31 00:38:45.042249 kernel: clocksource: Switched to clocksource kvm-clock
Oct 31 00:38:45.042265 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 00:38:45.042276 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 00:38:45.042287 kernel: pnp: PnP ACPI init
Oct 31 00:38:45.042510 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 31 00:38:45.042530 kernel: pnp: PnP ACPI: found 6 devices
Oct 31 00:38:45.042541 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 31 00:38:45.042552 kernel: NET: Registered PF_INET protocol family
Oct 31 00:38:45.042563 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 31 00:38:45.042580 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 31 00:38:45.042591 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 00:38:45.042601 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 00:38:45.042629 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 31 00:38:45.042640 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 31 00:38:45.042650 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:38:45.042661 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:38:45.042672 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 00:38:45.042682 kernel: NET: Registered PF_XDP protocol family
Oct 31 00:38:45.042861 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Oct 31 00:38:45.043059 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Oct 31 00:38:45.043231 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 31 00:38:45.043351 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 31 00:38:45.043465 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 31 00:38:45.043593 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 31 00:38:45.043759 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 31 00:38:45.043965 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Oct 31 00:38:45.044000 kernel: PCI: CLS 0 bytes, default 64
Oct 31 00:38:45.044029 kernel: Initialise system trusted keyrings
Oct 31 00:38:45.044055 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 31 00:38:45.044081 kernel: Key type asymmetric registered
Oct 31 00:38:45.044104 kernel: Asymmetric key parser 'x509' registered
Oct 31 00:38:45.044135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 31 00:38:45.044160 kernel: io scheduler mq-deadline registered
Oct 31 00:38:45.044181 kernel: io scheduler kyber registered
Oct 31 00:38:45.044201 kernel: io scheduler bfq registered
Oct 31 00:38:45.044222 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 31 00:38:45.044246 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 31 00:38:45.044263 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 31 00:38:45.044286 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 31 00:38:45.044303 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 31 00:38:45.044323 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 31 00:38:45.044344 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 31 00:38:45.044363 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 31 00:38:45.044391 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 31 00:38:45.044735 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 31 00:38:45.044763 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 31 00:38:45.044961 kernel: rtc_cmos 00:04: registered as rtc0
Oct 31 00:38:45.045186 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T00:38:44 UTC (1761871124)
Oct 31 00:38:45.045376 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 31 00:38:45.045393 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 31 00:38:45.045404 kernel: efifb: probing for efifb
Oct 31 00:38:45.045420 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Oct 31 00:38:45.045431 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Oct 31 00:38:45.045441 kernel: efifb: scrolling: redraw
Oct 31 00:38:45.045452 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Oct 31 00:38:45.045462 kernel: Console: switching to colour frame buffer device 100x37
Oct 31 00:38:45.045473 kernel: fb0: EFI VGA frame buffer device
Oct 31 00:38:45.045507 kernel: pstore: Using crash dump compression: deflate
Oct 31 00:38:45.045521 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 31 00:38:45.045532 kernel: NET: Registered PF_INET6 protocol family
Oct 31 00:38:45.045546 kernel: Segment Routing with IPv6
Oct 31 00:38:45.045557 kernel: In-situ OAM (IOAM) with IPv6
Oct 31 00:38:45.045567 kernel: NET: Registered PF_PACKET protocol family
Oct 31 00:38:45.045578 kernel: Key type dns_resolver registered
Oct 31 00:38:45.045589 kernel: IPI shorthand broadcast: enabled
Oct 31 00:38:45.045600 kernel: sched_clock: Marking stable (1005002715, 218263986)->(1465196727, -241930026)
Oct 31 00:38:45.045682 kernel: registered taskstats version 1
Oct 31 00:38:45.045694 kernel: Loading compiled-in X.509 certificates
Oct 31 00:38:45.045705 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228'
Oct 31 00:38:45.045721 kernel: Key type .fscrypt registered
Oct 31 00:38:45.045731 kernel: Key type fscrypt-provisioning registered
Oct 31 00:38:45.045741 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 31 00:38:45.045756 kernel: ima: Allocated hash algorithm: sha1
Oct 31 00:38:45.045767 kernel: ima: No architecture policies found
Oct 31 00:38:45.045777 kernel: clk: Disabling unused clocks
Oct 31 00:38:45.045789 kernel: Freeing unused kernel image (initmem) memory: 42880K
Oct 31 00:38:45.045801 kernel: Write protecting the kernel read-only data: 36864k
Oct 31 00:38:45.045812 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Oct 31 00:38:45.045827 kernel: Run /init as init process
Oct 31 00:38:45.045838 kernel: with arguments:
Oct 31 00:38:45.045849 kernel: /init
Oct 31 00:38:45.045860 kernel: with environment:
Oct 31 00:38:45.045871 kernel: HOME=/
Oct 31 00:38:45.045881 kernel: TERM=linux
Oct 31 00:38:45.045895 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:38:45.045910 systemd[1]: Detected virtualization kvm.
Oct 31 00:38:45.045927 systemd[1]: Detected architecture x86-64.
Oct 31 00:38:45.045939 systemd[1]: Running in initrd.
Oct 31 00:38:45.045955 systemd[1]: No hostname configured, using default hostname.
Oct 31 00:38:45.045967 systemd[1]: Hostname set to <localhost>.
Oct 31 00:38:45.045980 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:38:45.045997 systemd[1]: Queued start job for default target initrd.target.
Oct 31 00:38:45.046009 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:38:45.046022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:38:45.046036 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 31 00:38:45.046049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:38:45.046060 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 31 00:38:45.046073 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 31 00:38:45.046092 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 31 00:38:45.046103 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 31 00:38:45.046115 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:38:45.046137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:38:45.046148 systemd[1]: Reached target paths.target - Path Units.
Oct 31 00:38:45.046160 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:38:45.046172 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:38:45.046183 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 00:38:45.046199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:38:45.046211 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:38:45.046222 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 31 00:38:45.046234 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 31 00:38:45.046246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:38:45.046257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:38:45.046269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:38:45.046281 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 00:38:45.046296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 31 00:38:45.046309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:38:45.046320 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 31 00:38:45.046332 systemd[1]: Starting systemd-fsck-usr.service...
Oct 31 00:38:45.046344 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:38:45.046355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:38:45.046367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:38:45.046379 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 31 00:38:45.046391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:38:45.046407 systemd[1]: Finished systemd-fsck-usr.service.
Oct 31 00:38:45.046447 systemd-journald[192]: Collecting audit messages is disabled.
Oct 31 00:38:45.046480 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 00:38:45.046495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:38:45.046508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:38:45.046521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:38:45.046535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:38:45.046548 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 31 00:38:45.046565 systemd-journald[192]: Journal started
Oct 31 00:38:45.046592 systemd-journald[192]: Runtime Journal (/run/log/journal/a9005e7fef5e418fae472ae4725d878d) is 6.0M, max 48.3M, 42.2M free.
Oct 31 00:38:45.009895 systemd-modules-load[194]: Inserted module 'overlay'
Oct 31 00:38:45.055186 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:38:45.051517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:38:45.060656 kernel: Bridge firewalling registered
Oct 31 00:38:45.055601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:38:45.058465 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 31 00:38:45.064791 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:38:45.069097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:38:45.077732 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:38:45.092753 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 31 00:38:45.096670 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:38:45.108103 dracut-cmdline[225]: dracut-dracut-053
Oct 31 00:38:45.111299 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:38:45.116848 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:38:45.126954 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:38:45.159246 systemd-resolved[240]: Positive Trust Anchors:
Oct 31 00:38:45.159268 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:38:45.159300 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:38:45.161898 systemd-resolved[240]: Defaulting to hostname 'linux'.
Oct 31 00:38:45.163051 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:38:45.163493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:38:45.233647 kernel: SCSI subsystem initialized
Oct 31 00:38:45.243639 kernel: Loading iSCSI transport class v2.0-870.
Oct 31 00:38:45.254637 kernel: iscsi: registered transport (tcp)
Oct 31 00:38:45.276178 kernel: iscsi: registered transport (qla4xxx)
Oct 31 00:38:45.276225 kernel: QLogic iSCSI HBA Driver
Oct 31 00:38:45.332345 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:38:45.345849 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 31 00:38:45.380219 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 00:38:45.380292 kernel: device-mapper: uevent: version 1.0.3
Oct 31 00:38:45.382005 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 31 00:38:45.428677 kernel: raid6: avx2x4 gen() 29220 MB/s
Oct 31 00:38:45.445663 kernel: raid6: avx2x2 gen() 29518 MB/s
Oct 31 00:38:45.463463 kernel: raid6: avx2x1 gen() 24880 MB/s
Oct 31 00:38:45.463557 kernel: raid6: using algorithm avx2x2 gen() 29518 MB/s
Oct 31 00:38:45.481463 kernel: raid6: .... xor() 18937 MB/s, rmw enabled
Oct 31 00:38:45.481495 kernel: raid6: using avx2x2 recovery algorithm
Oct 31 00:38:45.503636 kernel: xor: automatically using best checksumming function avx
Oct 31 00:38:45.667648 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 31 00:38:45.688248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:38:45.699773 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:38:45.749760 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Oct 31 00:38:45.760483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:38:45.770827 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 31 00:38:45.790974 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Oct 31 00:38:45.837668 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:38:45.853794 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:38:45.932908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:38:45.949881 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 31 00:38:45.968561 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:38:45.974244 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 31 00:38:45.975825 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:38:45.986161 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 31 00:38:45.983815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:38:45.986463 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:38:45.998265 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 31 00:38:45.998342 kernel: GPT:9289727 != 19775487
Oct 31 00:38:45.998354 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 31 00:38:46.000679 kernel: GPT:9289727 != 19775487
Oct 31 00:38:46.000697 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 31 00:38:46.000709 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:38:46.000787 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 31 00:38:46.009651 kernel: cryptd: max_cpu_qlen set to 1000
Oct 31 00:38:46.014130 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:38:46.014363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:38:46.021837 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:38:46.026655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:38:46.031594 kernel: libata version 3.00 loaded.
Oct 31 00:38:46.026873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:38:46.031553 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:38:46.043670 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 31 00:38:46.043738 kernel: AES CTR mode by8 optimization enabled
Oct 31 00:38:46.045073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:38:46.051469 kernel: ahci 0000:00:1f.2: version 3.0
Oct 31 00:38:46.051816 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 31 00:38:46.051854 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 31 00:38:46.051246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:38:46.070141 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 31 00:38:46.070361 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (474)
Oct 31 00:38:46.073625 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Oct 31 00:38:46.073650 kernel: scsi host0: ahci
Oct 31 00:38:46.077626 kernel: scsi host1: ahci
Oct 31 00:38:46.079628 kernel: scsi host2: ahci
Oct 31 00:38:46.077852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:38:46.084547 kernel: scsi host3: ahci
Oct 31 00:38:46.085836 kernel: scsi host4: ahci
Oct 31 00:38:46.089664 kernel: scsi host5: ahci
Oct 31 00:38:46.089853 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Oct 31 00:38:46.089865 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Oct 31 00:38:46.092498 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Oct 31 00:38:46.092520 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Oct 31 00:38:46.095325 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Oct 31 00:38:46.095357 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Oct 31 00:38:46.098838 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 31 00:38:46.111892 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 31 00:38:46.125187 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 31 00:38:46.129515 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 31 00:38:46.140196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 00:38:46.153777 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 31 00:38:46.157645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:38:46.157713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:38:46.161796 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:38:46.169165 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:38:46.169190 disk-uuid[565]: Primary Header is updated.
Oct 31 00:38:46.169190 disk-uuid[565]: Secondary Entries is updated.
Oct 31 00:38:46.169190 disk-uuid[565]: Secondary Header is updated.
Oct 31 00:38:46.174543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:38:46.186789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:38:46.214552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:38:46.235216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:38:46.275139 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:38:46.409243 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 31 00:38:46.409331 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 31 00:38:46.409674 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 31 00:38:46.412640 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 31 00:38:46.412717 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 31 00:38:46.413636 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 31 00:38:46.414659 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 31 00:38:46.416667 kernel: ata3.00: applying bridge limits
Oct 31 00:38:46.417672 kernel: ata3.00: configured for UDMA/100
Oct 31 00:38:46.418633 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 31 00:38:46.462335 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 31 00:38:46.462754 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 31 00:38:46.476668 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 31 00:38:47.173664 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:38:47.174313 disk-uuid[566]: The operation has completed successfully.
Oct 31 00:38:47.216983 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 31 00:38:47.217173 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 31 00:38:47.233758 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 31 00:38:47.238151 sh[597]: Success
Oct 31 00:38:47.255643 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 31 00:38:47.294875 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 31 00:38:47.307357 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 31 00:38:47.310582 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 31 00:38:47.325303 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0
Oct 31 00:38:47.325352 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:38:47.325366 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 31 00:38:47.328384 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 31 00:38:47.328401 kernel: BTRFS info (device dm-0): using free space tree
Oct 31 00:38:47.334163 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 31 00:38:47.338428 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 31 00:38:47.351990 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 31 00:38:47.356474 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 31 00:38:47.365452 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:38:47.365514 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:38:47.365530 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:38:47.369704 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:38:47.380310 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 31 00:38:47.383148 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:38:47.392803 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 31 00:38:47.407066 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 31 00:38:47.487165 ignition[689]: Ignition 2.19.0
Oct 31 00:38:47.487179 ignition[689]: Stage: fetch-offline
Oct 31 00:38:47.487216 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:38:47.487226 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:38:47.487315 ignition[689]: parsed url from cmdline: ""
Oct 31 00:38:47.487320 ignition[689]: no config URL provided
Oct 31 00:38:47.487326 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Oct 31 00:38:47.487335 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Oct 31 00:38:47.487365 ignition[689]: op(1): [started] loading QEMU firmware config module
Oct 31 00:38:47.519878 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:38:47.487370 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 31 00:38:47.588740 ignition[689]: op(1): [finished] loading QEMU firmware config module
Oct 31 00:38:47.591159 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:38:47.616625 systemd-networkd[786]: lo: Link UP
Oct 31 00:38:47.616638 systemd-networkd[786]: lo: Gained carrier
Oct 31 00:38:47.619110 systemd-networkd[786]: Enumeration completed
Oct 31 00:38:47.619236 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:38:47.620164 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:38:47.620169 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:38:47.621111 systemd-networkd[786]: eth0: Link UP
Oct 31 00:38:47.621115 systemd-networkd[786]: eth0: Gained carrier
Oct 31 00:38:47.621122 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:38:47.622286 systemd[1]: Reached target network.target - Network.
Oct 31 00:38:47.636653 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:38:47.698728 ignition[689]: parsing config with SHA512: e9849655d7f1f330fb21466488522a6946727377cc36e7bc6ea715bb67b3fb174c3661049bcebe887e64ea204927806659d9366d6f9bdf38dbf112ef965cd388
Oct 31 00:38:47.703380 unknown[689]: fetched base config from "system"
Oct 31 00:38:47.703400 unknown[689]: fetched user config from "qemu"
Oct 31 00:38:47.707065 ignition[689]: fetch-offline: fetch-offline passed
Oct 31 00:38:47.708549 ignition[689]: Ignition finished successfully
Oct 31 00:38:47.711332 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:38:47.713783 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 31 00:38:47.722973 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 31 00:38:47.741549 ignition[790]: Ignition 2.19.0
Oct 31 00:38:47.741563 ignition[790]: Stage: kargs
Oct 31 00:38:47.741781 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:38:47.741794 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:38:47.742817 ignition[790]: kargs: kargs passed
Oct 31 00:38:47.742868 ignition[790]: Ignition finished successfully
Oct 31 00:38:47.752623 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 31 00:38:47.761791 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 31 00:38:47.781373 ignition[798]: Ignition 2.19.0
Oct 31 00:38:47.781389 ignition[798]: Stage: disks
Oct 31 00:38:47.781582 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:38:47.781594 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:38:47.782410 ignition[798]: disks: disks passed
Oct 31 00:38:47.782455 ignition[798]: Ignition finished successfully
Oct 31 00:38:47.821088 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 31 00:38:47.823165 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 31 00:38:47.826434 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 31 00:38:47.828631 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:38:47.832149 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:38:47.856519 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:38:47.875812 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 31 00:38:47.905254 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 31 00:38:48.310054 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 31 00:38:48.334750 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 31 00:38:48.453663 kernel: EXT4-fs (vda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none.
Oct 31 00:38:48.455503 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 31 00:38:48.456521 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:38:48.473832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:38:48.478579 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 31 00:38:48.482575 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 31 00:38:48.492125 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Oct 31 00:38:48.492158 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:38:48.492170 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:38:48.492181 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:38:48.482672 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 31 00:38:48.498972 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:38:48.482708 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:38:48.502477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:38:48.505992 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 31 00:38:48.518848 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 31 00:38:48.558898 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Oct 31 00:38:48.563571 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Oct 31 00:38:48.569069 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Oct 31 00:38:48.579270 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 31 00:38:48.680935 systemd-networkd[786]: eth0: Gained IPv6LL
Oct 31 00:38:48.723733 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 31 00:38:48.740874 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 31 00:38:48.745685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 31 00:38:48.750940 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 31 00:38:48.753788 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:38:48.793164 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 31 00:38:48.799697 ignition[929]: INFO : Ignition 2.19.0
Oct 31 00:38:48.799697 ignition[929]: INFO : Stage: mount
Oct 31 00:38:48.802285 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:38:48.802285 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:38:48.806550 ignition[929]: INFO : mount: mount passed
Oct 31 00:38:48.807797 ignition[929]: INFO : Ignition finished successfully
Oct 31 00:38:48.811545 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 31 00:38:48.824816 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 31 00:38:48.864223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:38:48.878651 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Oct 31 00:38:48.883315 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:38:48.883370 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:38:48.883382 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:38:48.887635 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:38:48.890267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:38:48.920372 ignition[961]: INFO : Ignition 2.19.0
Oct 31 00:38:48.920372 ignition[961]: INFO : Stage: files
Oct 31 00:38:48.923203 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:38:48.923203 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:38:48.927426 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 00:38:48.930036 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 00:38:48.930036 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 00:38:48.936885 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 00:38:48.939542 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 00:38:48.942813 unknown[961]: wrote ssh authorized keys file for user: core
Oct 31 00:38:48.944710 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 00:38:48.947975 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:38:48.951694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 31 00:38:48.978017 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 00:38:49.101797 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:38:49.101797 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:38:49.108509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 31 00:38:49.554485 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 00:38:50.209716 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:38:50.209716 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 31 00:38:50.216466 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:38:50.220684 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:38:50.220684 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 31 00:38:50.220684 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 31 00:38:50.229636 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:38:50.234003 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:38:50.234003 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 31 00:38:50.234003 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:38:50.275328 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:38:50.284731 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:38:50.287358 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:38:50.287358 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 00:38:50.287358 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 00:38:50.287358 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:38:50.287358 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:38:50.287358 ignition[961]: INFO : files: files passed
Oct 31 00:38:50.287358 ignition[961]: INFO : Ignition finished successfully
Oct 31 00:38:50.307169 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 00:38:50.321071 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 00:38:50.322435 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 00:38:50.335156 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 00:38:50.335325 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 00:38:50.339377 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 31 00:38:50.344240 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:38:50.344240 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:38:50.353582 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:38:50.345701 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:38:50.349285 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 00:38:50.362810 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 00:38:50.394680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 00:38:50.394870 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 00:38:50.399565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 00:38:50.403928 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 00:38:50.408295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 00:38:50.422019 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 00:38:50.441041 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:38:50.456983 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 00:38:50.472989 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:38:50.477253 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:38:50.481550 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 00:38:50.485317 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 00:38:50.487306 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:38:50.492721 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 00:38:50.496709 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 00:38:50.500366 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 00:38:50.504579 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:38:50.508768 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 00:38:50.512897 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 31 00:38:50.516896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:38:50.521831 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 00:38:50.525733 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 00:38:50.529500 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 00:38:50.532573 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:38:50.534430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:38:50.538427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:38:50.542159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:38:50.546104 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 00:38:50.548044 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:38:50.552776 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:38:50.554504 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:38:50.558406 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:38:50.560214 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:38:50.564740 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 00:38:50.567669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:38:50.571723 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:38:50.576254 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 00:38:50.579292 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 00:38:50.582396 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:38:50.583806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:38:50.587034 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:38:50.588511 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:38:50.591953 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:38:50.592109 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:38:50.598175 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:38:50.598321 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 00:38:50.612980 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 00:38:50.616837 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:38:50.618968 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:38:50.625250 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 00:38:50.628907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:38:50.631143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:38:50.634705 ignition[1015]: INFO : Ignition 2.19.0
Oct 31 00:38:50.634705 ignition[1015]: INFO : Stage: umount
Oct 31 00:38:50.641566 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:38:50.641566 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:38:50.641566 ignition[1015]: INFO : umount: umount passed
Oct 31 00:38:50.641566 ignition[1015]: INFO : Ignition finished successfully
Oct 31 00:38:50.636024 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:38:50.636268 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:38:50.656134 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:38:50.658035 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 00:38:50.663989 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:38:50.668754 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:38:50.670710 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 00:38:50.676542 systemd[1]: Stopped target network.target - Network.
Oct 31 00:38:50.680027 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:38:50.681973 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 00:38:50.686145 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:38:50.687991 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 00:38:50.691654 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:38:50.693527 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 00:38:50.697410 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 00:38:50.699396 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 00:38:50.703918 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 00:38:50.708088 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 00:38:50.713647 systemd-networkd[786]: eth0: DHCPv6 lease lost
Oct 31 00:38:50.716537 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:38:50.718564 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 00:38:50.723266 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:38:50.725195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 00:38:50.730460 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:38:50.732043 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:38:50.746731 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 00:38:50.746833 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:38:50.746899 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:38:50.751895 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:38:50.751978 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:38:50.755321 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:38:50.755388 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:38:50.759127 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 00:38:50.759193 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:38:50.762768 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:38:50.784413 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:38:50.786443 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:38:50.791435 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:38:50.793437 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 00:38:50.799094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:38:50.799164 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:38:50.804792 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:38:50.804858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:38:50.810385 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:38:50.810467 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:38:50.815803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:38:50.815875 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:38:50.820734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:38:50.820807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:38:50.836871 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 00:38:50.840876 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 00:38:50.840995 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:38:50.847128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:38:50.848875 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:38:50.853377 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:38:50.855036 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 00:38:50.858664 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:38:50.860477 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 00:38:50.865662 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 00:38:50.869057 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:38:50.870644 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 00:38:50.885878 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 00:38:50.897157 systemd[1]: Switching root.
Oct 31 00:38:50.936219 systemd-journald[192]: Journal stopped
Oct 31 00:38:52.288680 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Oct 31 00:38:52.288755 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 00:38:52.288779 kernel: SELinux: policy capability open_perms=1
Oct 31 00:38:52.288791 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 00:38:52.288803 kernel: SELinux: policy capability always_check_network=0
Oct 31 00:38:52.288822 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 00:38:52.288838 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 00:38:52.288850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 00:38:52.288862 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 00:38:52.288874 kernel: audit: type=1403 audit(1761871131.326:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 00:38:52.288896 systemd[1]: Successfully loaded SELinux policy in 54.372ms.
Oct 31 00:38:52.288934 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.606ms.
Oct 31 00:38:52.288952 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:38:52.288968 systemd[1]: Detected virtualization kvm.
Oct 31 00:38:52.288982 systemd[1]: Detected architecture x86-64.
Oct 31 00:38:52.288994 systemd[1]: Detected first boot.
Oct 31 00:38:52.289006 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:38:52.289018 zram_generator::config[1061]: No configuration found.
Oct 31 00:38:52.289031 systemd[1]: Populated /etc with preset unit settings.
Oct 31 00:38:52.289050 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 31 00:38:52.289064 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 31 00:38:52.289076 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:38:52.289089 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 31 00:38:52.289104 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 31 00:38:52.289116 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 31 00:38:52.289128 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 31 00:38:52.289140 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 31 00:38:52.289159 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 31 00:38:52.289171 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 31 00:38:52.289183 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 31 00:38:52.289196 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:38:52.289209 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:38:52.289221 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 31 00:38:52.289233 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 31 00:38:52.289246 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 31 00:38:52.289259 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:38:52.289277 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 31 00:38:52.289290 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:38:52.289304 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 31 00:38:52.289316 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 31 00:38:52.289331 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:38:52.289352 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 31 00:38:52.289364 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:38:52.289376 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:38:52.289394 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:38:52.289406 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:38:52.289419 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 31 00:38:52.289431 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 31 00:38:52.289443 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:38:52.289454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:38:52.289466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:38:52.289479 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 31 00:38:52.289491 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 31 00:38:52.289509 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 31 00:38:52.289521 systemd[1]: Mounting media.mount - External Media Directory...
Oct 31 00:38:52.289533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:38:52.289546 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 31 00:38:52.289558 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 31 00:38:52.289570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 31 00:38:52.289583 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 00:38:52.289596 systemd[1]: Reached target machines.target - Containers.
Oct 31 00:38:52.289629 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 00:38:52.289643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:38:52.289656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:38:52.289671 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 00:38:52.289684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:38:52.289696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:38:52.289708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:38:52.289720 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 00:38:52.289732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:38:52.289751 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 00:38:52.289763 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 00:38:52.289775 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 00:38:52.289787 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 00:38:52.289799 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 00:38:52.289811 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:38:52.289823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:38:52.289835 kernel: loop: module loaded
Oct 31 00:38:52.289867 systemd-journald[1124]: Collecting audit messages is disabled.
Oct 31 00:38:52.289901 systemd-journald[1124]: Journal started
Oct 31 00:38:52.289936 systemd-journald[1124]: Runtime Journal (/run/log/journal/a9005e7fef5e418fae472ae4725d878d) is 6.0M, max 48.3M, 42.2M free.
Oct 31 00:38:51.967580 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 00:38:51.988632 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 31 00:38:51.989214 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 00:38:52.294643 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 00:38:52.294731 kernel: fuse: init (API version 7.39)
Oct 31 00:38:52.303960 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 00:38:52.311228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:38:52.311320 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 31 00:38:52.312955 systemd[1]: Stopped verity-setup.service.
Oct 31 00:38:52.321443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:38:52.321540 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:38:52.324168 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 00:38:52.326088 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 00:38:52.328094 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 00:38:52.330181 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 00:38:52.332638 kernel: ACPI: bus type drm_connector registered
Oct 31 00:38:52.333368 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 00:38:52.335768 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 00:38:52.337724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:38:52.340419 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 00:38:52.340629 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 00:38:52.343016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:38:52.343203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:38:52.353328 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:38:52.353536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:38:52.355677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:38:52.355863 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:38:52.358334 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 00:38:52.358520 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 00:38:52.360667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:38:52.360846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:38:52.363019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:38:52.365552 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 00:38:52.368141 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 00:38:52.383990 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 00:38:52.396822 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 00:38:52.400241 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 00:38:52.402036 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 00:38:52.402074 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:38:52.404762 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 31 00:38:52.423219 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 00:38:52.426718 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 00:38:52.428780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:38:52.430594 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 00:38:52.436147 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 00:38:52.438310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:38:52.439803 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 00:38:52.443968 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:38:52.460195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:38:52.464140 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 00:38:52.468707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:38:52.471460 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 00:38:52.474048 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 00:38:52.477498 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 00:38:52.569476 systemd-journald[1124]: Time spent on flushing to /var/log/journal/a9005e7fef5e418fae472ae4725d878d is 48.657ms for 995 entries.
Oct 31 00:38:52.569476 systemd-journald[1124]: System Journal (/var/log/journal/a9005e7fef5e418fae472ae4725d878d) is 8.0M, max 195.6M, 187.6M free.
Oct 31 00:38:52.640035 systemd-journald[1124]: Received client request to flush runtime journal.
Oct 31 00:38:52.640177 kernel: loop0: detected capacity change from 0 to 142488
Oct 31 00:38:52.640195 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 00:38:52.584225 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 31 00:38:52.587015 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 00:38:52.590716 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 00:38:52.600854 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 31 00:38:52.610039 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 00:38:52.631465 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 00:38:52.633860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:38:52.637069 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 31 00:38:52.649437 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 00:38:52.666960 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 00:38:52.667805 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 31 00:38:52.673727 kernel: loop1: detected capacity change from 0 to 219144
Oct 31 00:38:52.681570 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 00:38:52.695251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:38:52.706820 kernel: loop2: detected capacity change from 0 to 140768
Oct 31 00:38:52.754102 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Oct 31 00:38:52.754630 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Oct 31 00:38:52.756662 kernel: loop3: detected capacity change from 0 to 142488
Oct 31 00:38:52.798128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:38:52.814635 kernel: loop4: detected capacity change from 0 to 219144
Oct 31 00:38:52.822730 kernel: loop5: detected capacity change from 0 to 140768
Oct 31 00:38:52.832403 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 31 00:38:52.833203 (sd-merge)[1198]: Merged extensions into '/usr'.
Oct 31 00:38:52.845917 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 00:38:52.845938 systemd[1]: Reloading...
Oct 31 00:38:52.940636 zram_generator::config[1225]: No configuration found.
Oct 31 00:38:53.157310 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:38:53.160387 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 00:38:53.214078 systemd[1]: Reloading finished in 367 ms.
Oct 31 00:38:53.246542 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 00:38:53.300480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 00:38:53.312835 systemd[1]: Starting ensure-sysext.service...
Oct 31 00:38:53.315877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:38:53.322697 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Oct 31 00:38:53.322709 systemd[1]: Reloading...
Oct 31 00:38:53.348917 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 00:38:53.349333 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 00:38:53.350446 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 00:38:53.350785 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Oct 31 00:38:53.350869 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Oct 31 00:38:53.354520 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:38:53.354533 systemd-tmpfiles[1263]: Skipping /boot
Oct 31 00:38:53.370160 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:38:53.370833 systemd-tmpfiles[1263]: Skipping /boot
Oct 31 00:38:53.387646 zram_generator::config[1295]: No configuration found.
Oct 31 00:38:53.530825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:38:53.585980 systemd[1]: Reloading finished in 262 ms.
Oct 31 00:38:53.605867 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 00:38:53.617271 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:38:53.628528 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:38:53.642370 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 00:38:53.647118 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 00:38:53.652763 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:38:53.660842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:38:53.675957 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 00:38:53.683346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:38:53.683552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:38:53.685852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:38:53.714084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:38:53.716042 augenrules[1350]: No rules
Oct 31 00:38:53.726464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:38:53.728666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:38:53.732073 systemd-udevd[1339]: Using default interface naming scheme 'v255'.
Oct 31 00:38:53.737537 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 00:38:53.740632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:38:53.742394 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:38:53.757780 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 00:38:53.760796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:38:53.761033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:38:53.763634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:38:53.763827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:38:53.766462 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:38:53.766670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:38:53.775358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:38:53.787202 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:38:53.788971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:38:53.789193 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:38:53.793849 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 00:38:53.798565 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 00:38:53.803983 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 00:38:53.820861 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 00:38:53.824452 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 00:38:53.841429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:38:53.841755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:38:53.865998 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:38:53.870123 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:38:53.874809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:38:53.879924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:38:53.883971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:38:53.884130 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:38:53.884216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:38:53.885282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:38:53.885470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:38:53.887905 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:38:53.888094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:38:53.892787 systemd[1]: Finished ensure-sysext.service.
Oct 31 00:38:53.906847 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 00:38:53.908986 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 31 00:38:53.913226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:38:53.916233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:38:53.918670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:38:53.918698 systemd-resolved[1338]: Positive Trust Anchors:
Oct 31 00:38:53.918712 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:38:53.918745 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:38:53.918957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:38:53.923518 systemd-resolved[1338]: Defaulting to hostname 'linux'.
Oct 31 00:38:53.926420 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:38:53.928575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:38:53.930584 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:38:53.930772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:38:53.935626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 31 00:38:53.949647 kernel: ACPI: button: Power Button [PWRF]
Oct 31 00:38:53.944587 systemd-networkd[1368]: lo: Link UP
Oct 31 00:38:53.944593 systemd-networkd[1368]: lo: Gained carrier
Oct 31 00:38:53.946476 systemd-networkd[1368]: Enumeration completed
Oct 31 00:38:53.946624 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:38:53.947666 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:38:53.947675 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:38:53.948922 systemd-networkd[1368]: eth0: Link UP
Oct 31 00:38:53.948927 systemd-networkd[1368]: eth0: Gained carrier
Oct 31 00:38:53.948941 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:38:53.950887 systemd[1]: Reached target network.target - Network.
Oct 31 00:38:53.957917 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 31 00:38:53.962384 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:38:53.968078 systemd-networkd[1368]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:38:53.975167 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 31 00:38:53.976025 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 31 00:38:53.994486 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 31 00:38:53.995262 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 31 00:38:54.000569 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1365)
Oct 31 00:38:54.035469 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 31 00:38:54.038633 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 31 00:38:54.039894 systemd[1]: Reached target time-set.target - System Time Set.
Oct 31 00:38:54.041941 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 31 00:38:54.042047 systemd-timesyncd[1401]: Initial clock synchronization to Fri 2025-10-31 00:38:53.901999 UTC.
Oct 31 00:38:54.186668 kernel: mousedev: PS/2 mouse device common for all mice
Oct 31 00:38:54.191500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:38:54.198026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 00:38:54.205584 kernel: kvm_amd: TSC scaling supported
Oct 31 00:38:54.205692 kernel: kvm_amd: Nested Virtualization enabled
Oct 31 00:38:54.205708 kernel: kvm_amd: Nested Paging enabled
Oct 31 00:38:54.207291 kernel: kvm_amd: LBR virtualization supported
Oct 31 00:38:54.207329 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 31 00:38:54.207329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 31 00:38:54.208712 kernel: kvm_amd: Virtual GIF supported
Oct 31 00:38:54.236633 kernel: EDAC MC: Ver: 3.0.0
Oct 31 00:38:54.245466 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 31 00:38:54.271875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:38:54.280638 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 31 00:38:54.298232 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 31 00:38:54.309620 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:38:54.349050 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 31 00:38:54.351742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:38:54.353551 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:38:54.355423 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 31 00:38:54.357470 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 31 00:38:54.360084 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 31 00:38:54.362047 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 31 00:38:54.364102 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 31 00:38:54.366139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 00:38:54.366172 systemd[1]: Reached target paths.target - Path Units.
Oct 31 00:38:54.367644 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 00:38:54.370390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 31 00:38:54.374238 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 31 00:38:54.381773 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 31 00:38:54.385506 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 31 00:38:54.387980 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 31 00:38:54.389842 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 00:38:54.391426 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:38:54.391537 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:38:54.391562 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:38:54.392899 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 31 00:38:54.395800 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 31 00:38:54.399207 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:38:54.400719 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 31 00:38:54.404817 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 31 00:38:54.406768 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 31 00:38:54.408954 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 31 00:38:54.410559 jq[1436]: false
Oct 31 00:38:54.414766 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 31 00:38:54.421071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 31 00:38:54.428815 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 31 00:38:54.433465 extend-filesystems[1437]: Found loop3
Oct 31 00:38:54.435998 extend-filesystems[1437]: Found loop4
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found loop5
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found sr0
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda1
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda2
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda3
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found usr
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda4
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda6
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda7
Oct 31 00:38:54.441280 extend-filesystems[1437]: Found vda9
Oct 31 00:38:54.441280 extend-filesystems[1437]: Checking size of /dev/vda9
Oct 31 00:38:54.491811 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 31 00:38:54.491875 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1370)
Oct 31 00:38:54.439215 dbus-daemon[1435]: [system] SELinux support is enabled
Oct 31 00:38:54.437472 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 31 00:38:54.492283 extend-filesystems[1437]: Resized partition /dev/vda9
Oct 31 00:38:54.439827 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 00:38:54.496511 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024)
Oct 31 00:38:54.440486 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 31 00:38:54.515055 update_engine[1450]: I20251031 00:38:54.510104 1450 main.cc:92] Flatcar Update Engine starting
Oct 31 00:38:54.515055 update_engine[1450]: I20251031 00:38:54.511464 1450 update_check_scheduler.cc:74] Next update check in 10m56s
Oct 31 00:38:54.442930 systemd[1]: Starting update-engine.service - Update Engine...
Oct 31 00:38:54.515406 jq[1453]: true
Oct 31 00:38:54.448357 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 31 00:38:54.458011 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 31 00:38:54.471563 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 31 00:38:54.489959 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 00:38:54.490214 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 31 00:38:54.490602 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 00:38:54.490825 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 00:38:54.497059 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 00:38:54.497304 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 00:38:54.509601 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 31 00:38:54.519572 jq[1462]: true Oct 31 00:38:54.537082 systemd[1]: Started update-engine.service - Update Engine. Oct 31 00:38:54.541317 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 00:38:54.541350 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 00:38:54.554082 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 00:38:54.554113 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 00:38:54.565805 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 00:38:54.602396 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 00:38:54.631692 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 00:38:54.638791 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 00:38:54.650274 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 00:38:54.650553 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 00:38:54.653880 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 00:38:54.705645 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 31 00:38:54.711733 tar[1461]: linux-amd64/LICENSE Oct 31 00:38:54.713700 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 00:38:54.726957 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 00:38:54.730355 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 00:38:54.759738 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 00:38:54.946274 tar[1461]: linux-amd64/helm Oct 31 00:38:54.824532 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 00:38:54.946776 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Oct 31 00:38:54.946808 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 00:38:54.947443 systemd-logind[1448]: New seat seat0. Oct 31 00:38:54.949436 systemd[1]: Started systemd-logind.service - User Login Management. Oct 31 00:38:54.951841 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 31 00:38:54.951841 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 00:38:54.951841 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 31 00:38:54.961177 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Oct 31 00:38:54.956263 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Oct 31 00:38:54.956496 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 00:38:54.964661 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Oct 31 00:38:54.966372 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 00:38:54.970917 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 00:38:55.008749 containerd[1463]: time="2025-10-31T00:38:55.008591890Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 31 00:38:55.037764 containerd[1463]: time="2025-10-31T00:38:55.037692686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:38:55.040635 containerd[1463]: time="2025-10-31T00:38:55.039968538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:38:55.040635 containerd[1463]: time="2025-10-31T00:38:55.040012115Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 31 00:38:55.040635 containerd[1463]: time="2025-10-31T00:38:55.040031015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 31 00:38:55.040635 containerd[1463]: time="2025-10-31T00:38:55.040270886Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 31 00:38:55.040635 containerd[1463]: time="2025-10-31T00:38:55.040297859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 31 00:38:55.040635 containerd[1463]: time="2025-10-31T00:38:55.040381470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:38:55.040635 containerd[1463]: time="2025-10-31T00:38:55.040396345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:38:55.040921 containerd[1463]: time="2025-10-31T00:38:55.040896935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:38:55.040991 containerd[1463]: time="2025-10-31T00:38:55.040975586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 31 00:38:55.041054 containerd[1463]: time="2025-10-31T00:38:55.041038695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:38:55.041409 containerd[1463]: time="2025-10-31T00:38:55.041385606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 31 00:38:55.041630 containerd[1463]: time="2025-10-31T00:38:55.041573709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 31 00:38:55.042035 containerd[1463]: time="2025-10-31T00:38:55.042009972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:38:55.042318 containerd[1463]: time="2025-10-31T00:38:55.042289079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:38:55.042386 containerd[1463]: time="2025-10-31T00:38:55.042370684Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 31 00:38:55.042557 containerd[1463]: time="2025-10-31T00:38:55.042539602Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 31 00:38:55.042716 containerd[1463]: time="2025-10-31T00:38:55.042689865Z" level=info msg="metadata content store policy set" policy=shared Oct 31 00:38:55.048693 containerd[1463]: time="2025-10-31T00:38:55.048659700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 31 00:38:55.048754 containerd[1463]: time="2025-10-31T00:38:55.048713289Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 31 00:38:55.048754 containerd[1463]: time="2025-10-31T00:38:55.048732347Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 31 00:38:55.048754 containerd[1463]: time="2025-10-31T00:38:55.048749011Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 31 00:38:55.048823 containerd[1463]: time="2025-10-31T00:38:55.048765244Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 31 00:38:55.048951 containerd[1463]: time="2025-10-31T00:38:55.048932764Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 31 00:38:55.049278 containerd[1463]: time="2025-10-31T00:38:55.049253275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 31 00:38:55.049401 containerd[1463]: time="2025-10-31T00:38:55.049384914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 31 00:38:55.049429 containerd[1463]: time="2025-10-31T00:38:55.049402495Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 31 00:38:55.049429 containerd[1463]: time="2025-10-31T00:38:55.049414623Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 31 00:38:55.049475 containerd[1463]: time="2025-10-31T00:38:55.049428532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 31 00:38:55.049475 containerd[1463]: time="2025-10-31T00:38:55.049440699Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 31 00:38:55.049475 containerd[1463]: time="2025-10-31T00:38:55.049452856Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Oct 31 00:38:55.049475 containerd[1463]: time="2025-10-31T00:38:55.049471106Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 31 00:38:55.049571 containerd[1463]: time="2025-10-31T00:38:55.049485615Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 31 00:38:55.049571 containerd[1463]: time="2025-10-31T00:38:55.049498599Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 31 00:38:55.049571 containerd[1463]: time="2025-10-31T00:38:55.049510816Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 31 00:38:55.049571 containerd[1463]: time="2025-10-31T00:38:55.049521289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 31 00:38:55.049571 containerd[1463]: time="2025-10-31T00:38:55.049539805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049571 containerd[1463]: time="2025-10-31T00:38:55.049552021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049571 containerd[1463]: time="2025-10-31T00:38:55.049564187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049576492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049587429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049618407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049631223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049642878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049654739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049668304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049678915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049689911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049700966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049714471Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049733834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049749249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.049766 containerd[1463]: time="2025-10-31T00:38:55.049759201Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049801854Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049817938Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049829564Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049840687Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049849664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049861379Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049870878Z" level=info msg="NRI interface is disabled by configuration." Oct 31 00:38:55.050097 containerd[1463]: time="2025-10-31T00:38:55.049880012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 31 00:38:55.050294 containerd[1463]: time="2025-10-31T00:38:55.050121055Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 00:38:55.050294 containerd[1463]: time="2025-10-31T00:38:55.050170204Z" level=info msg="Connect containerd service" Oct 31 00:38:55.050294 containerd[1463]: time="2025-10-31T00:38:55.050207768Z" level=info msg="using legacy CRI server" Oct 31 00:38:55.050294 containerd[1463]: time="2025-10-31T00:38:55.050214452Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 00:38:55.050294 containerd[1463]: time="2025-10-31T00:38:55.050298438Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 00:38:55.050907 containerd[1463]: time="2025-10-31T00:38:55.050884187Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:38:55.051272 
containerd[1463]: time="2025-10-31T00:38:55.051245471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 00:38:55.051304 containerd[1463]: time="2025-10-31T00:38:55.051298046Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 00:38:55.051460 containerd[1463]: time="2025-10-31T00:38:55.051421220Z" level=info msg="Start subscribing containerd event" Oct 31 00:38:55.051460 containerd[1463]: time="2025-10-31T00:38:55.051453606Z" level=info msg="Start recovering state" Oct 31 00:38:55.051532 containerd[1463]: time="2025-10-31T00:38:55.051504537Z" level=info msg="Start event monitor" Oct 31 00:38:55.051532 containerd[1463]: time="2025-10-31T00:38:55.051513761Z" level=info msg="Start snapshots syncer" Oct 31 00:38:55.051532 containerd[1463]: time="2025-10-31T00:38:55.051521577Z" level=info msg="Start cni network conf syncer for default" Oct 31 00:38:55.051532 containerd[1463]: time="2025-10-31T00:38:55.051528989Z" level=info msg="Start streaming server" Oct 31 00:38:55.051693 containerd[1463]: time="2025-10-31T00:38:55.051596841Z" level=info msg="containerd successfully booted in 0.045087s" Oct 31 00:38:55.051718 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 00:38:55.273081 systemd-networkd[1368]: eth0: Gained IPv6LL Oct 31 00:38:55.280425 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 00:38:55.283567 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 00:38:55.293921 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 31 00:38:55.297450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:38:55.301314 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 00:38:55.335410 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 00:38:55.335738 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 31 00:38:55.338452 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 00:38:55.340189 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 00:38:55.350700 tar[1461]: linux-amd64/README.md Oct 31 00:38:55.366472 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 00:38:56.347045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:38:56.349540 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 00:38:56.352353 systemd[1]: Startup finished in 1.187s (kernel) + 6.543s (initrd) + 5.078s (userspace) = 12.809s. Oct 31 00:38:56.445076 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:38:57.096492 kubelet[1547]: E1031 00:38:57.096385 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:38:57.101842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:38:57.102057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:38:57.102487 systemd[1]: kubelet.service: Consumed 1.667s CPU time. 
Oct 31 00:38:58.396500 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 00:38:58.398260 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:55566.service - OpenSSH per-connection server daemon (10.0.0.1:55566). Oct 31 00:38:58.453201 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 55566 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:38:58.455717 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:38:58.466817 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 00:38:58.480864 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 00:38:58.482914 systemd-logind[1448]: New session 1 of user core. Oct 31 00:38:58.495034 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 00:38:58.498042 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 00:38:58.509096 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:38:58.628841 systemd[1564]: Queued start job for default target default.target. Oct 31 00:38:58.640078 systemd[1564]: Created slice app.slice - User Application Slice. Oct 31 00:38:58.640108 systemd[1564]: Reached target paths.target - Paths. Oct 31 00:38:58.640122 systemd[1564]: Reached target timers.target - Timers. Oct 31 00:38:58.641864 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 00:38:58.657121 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 00:38:58.657289 systemd[1564]: Reached target sockets.target - Sockets. Oct 31 00:38:58.657313 systemd[1564]: Reached target basic.target - Basic System. Oct 31 00:38:58.657361 systemd[1564]: Reached target default.target - Main User Target. Oct 31 00:38:58.657401 systemd[1564]: Startup finished in 139ms. Oct 31 00:38:58.657764 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 00:38:58.659534 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 00:38:58.725008 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:55580.service - OpenSSH per-connection server daemon (10.0.0.1:55580). Oct 31 00:38:58.773401 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 55580 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:38:58.775686 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:38:58.781118 systemd-logind[1448]: New session 2 of user core. Oct 31 00:38:58.790977 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 00:38:58.849378 sshd[1575]: pam_unix(sshd:session): session closed for user core Oct 31 00:38:58.858726 systemd[1]: sshd@1-10.0.0.63:22-10.0.0.1:55580.service: Deactivated successfully. Oct 31 00:38:58.861145 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 00:38:58.863452 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Oct 31 00:38:58.880295 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:55586.service - OpenSSH per-connection server daemon (10.0.0.1:55586). Oct 31 00:38:58.881572 systemd-logind[1448]: Removed session 2. 
Oct 31 00:38:58.915180 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:38:58.916939 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:38:58.921788 systemd-logind[1448]: New session 3 of user core. Oct 31 00:38:58.934831 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 00:38:58.986270 sshd[1582]: pam_unix(sshd:session): session closed for user core Oct 31 00:38:58.998762 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:55586.service: Deactivated successfully. Oct 31 00:38:59.000763 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 00:38:59.002649 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Oct 31 00:38:59.019179 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:55594.service - OpenSSH per-connection server daemon (10.0.0.1:55594). Oct 31 00:38:59.020654 systemd-logind[1448]: Removed session 3. Oct 31 00:38:59.056242 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 55594 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:38:59.058537 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:38:59.063901 systemd-logind[1448]: New session 4 of user core. Oct 31 00:38:59.077865 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 00:38:59.137381 sshd[1589]: pam_unix(sshd:session): session closed for user core Oct 31 00:38:59.154354 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:55594.service: Deactivated successfully. Oct 31 00:38:59.156459 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 00:38:59.158276 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Oct 31 00:38:59.167987 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:55604.service - OpenSSH per-connection server daemon (10.0.0.1:55604). Oct 31 00:38:59.169537 systemd-logind[1448]: Removed session 4. Oct 31 00:38:59.207753 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 55604 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:38:59.210035 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:38:59.215726 systemd-logind[1448]: New session 5 of user core. Oct 31 00:38:59.225757 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 00:38:59.297769 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 00:38:59.298240 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:38:59.316532 sudo[1599]: pam_unix(sudo:session): session closed for user root Oct 31 00:38:59.319313 sshd[1596]: pam_unix(sshd:session): session closed for user core Oct 31 00:38:59.329270 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:55604.service: Deactivated successfully. Oct 31 00:38:59.332816 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 00:38:59.334925 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Oct 31 00:38:59.346006 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:55620.service - OpenSSH per-connection server daemon (10.0.0.1:55620). Oct 31 00:38:59.347204 systemd-logind[1448]: Removed session 5. 
Oct 31 00:38:59.382070 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 55620 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:38:59.384367 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:38:59.389180 systemd-logind[1448]: New session 6 of user core. Oct 31 00:38:59.399900 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 00:38:59.460064 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 00:38:59.460547 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:38:59.467584 sudo[1608]: pam_unix(sudo:session): session closed for user root Oct 31 00:38:59.476477 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 31 00:38:59.476880 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:38:59.499101 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 31 00:38:59.501575 auditctl[1611]: No rules Oct 31 00:38:59.503053 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 00:38:59.503409 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 31 00:38:59.505659 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 31 00:38:59.542694 augenrules[1629]: No rules Oct 31 00:38:59.543906 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 31 00:38:59.545502 sudo[1607]: pam_unix(sudo:session): session closed for user root Oct 31 00:38:59.547558 sshd[1604]: pam_unix(sshd:session): session closed for user core Oct 31 00:38:59.559900 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:55620.service: Deactivated successfully. Oct 31 00:38:59.561848 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 00:38:59.563260 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Oct 31 00:38:59.572878 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:55622.service - OpenSSH per-connection server daemon (10.0.0.1:55622). Oct 31 00:38:59.573781 systemd-logind[1448]: Removed session 6. Oct 31 00:38:59.608988 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 55622 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:38:59.611351 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:38:59.616321 systemd-logind[1448]: New session 7 of user core. Oct 31 00:38:59.625884 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 00:38:59.681783 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 00:38:59.682287 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:39:00.228857 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 00:39:00.229124 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 00:39:01.581131 dockerd[1659]: time="2025-10-31T00:39:01.581003485Z" level=info msg="Starting up" Oct 31 00:39:02.310789 dockerd[1659]: time="2025-10-31T00:39:02.310700049Z" level=info msg="Loading containers: start." 
Oct 31 00:39:02.433633 kernel: Initializing XFRM netlink socket Oct 31 00:39:02.552958 systemd-networkd[1368]: docker0: Link UP Oct 31 00:39:02.587241 dockerd[1659]: time="2025-10-31T00:39:02.587022873Z" level=info msg="Loading containers: done." Oct 31 00:39:02.612253 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1168751413-merged.mount: Deactivated successfully. Oct 31 00:39:02.615006 dockerd[1659]: time="2025-10-31T00:39:02.614944296Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 00:39:02.615157 dockerd[1659]: time="2025-10-31T00:39:02.615130033Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 31 00:39:02.615371 dockerd[1659]: time="2025-10-31T00:39:02.615339203Z" level=info msg="Daemon has completed initialization" Oct 31 00:39:02.677182 dockerd[1659]: time="2025-10-31T00:39:02.677081389Z" level=info msg="API listen on /run/docker.sock" Oct 31 00:39:02.677431 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 00:39:03.576771 containerd[1463]: time="2025-10-31T00:39:03.576703626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 31 00:39:04.881101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230984868.mount: Deactivated successfully. Oct 31 00:39:06.085551 containerd[1463]: time="2025-10-31T00:39:06.085459710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:06.087413 containerd[1463]: time="2025-10-31T00:39:06.087359543Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 31 00:39:06.088233 containerd[1463]: time="2025-10-31T00:39:06.088177370Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:06.091561 containerd[1463]: time="2025-10-31T00:39:06.091517836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:06.093044 containerd[1463]: time="2025-10-31T00:39:06.092972340Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.516181498s" Oct 31 00:39:06.093105 containerd[1463]: time="2025-10-31T00:39:06.093041511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 31 00:39:06.093863 containerd[1463]: time="2025-10-31T00:39:06.093835179Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 31 00:39:07.214626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 00:39:07.223949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 31 00:39:07.518638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:39:07.528291 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:39:08.041780 kubelet[1875]: E1031 00:39:08.041544 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:39:08.048729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:39:08.049218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:39:08.460820 containerd[1463]: time="2025-10-31T00:39:08.460670351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:08.461625 containerd[1463]: time="2025-10-31T00:39:08.461524738Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 31 00:39:08.462995 containerd[1463]: time="2025-10-31T00:39:08.462961522Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:08.466219 containerd[1463]: time="2025-10-31T00:39:08.466175697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:08.467375 containerd[1463]: time="2025-10-31T00:39:08.467324139Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.373458096s" Oct 31 00:39:08.467431 containerd[1463]: time="2025-10-31T00:39:08.467375038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 31 00:39:08.468099 containerd[1463]: time="2025-10-31T00:39:08.468067152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 31 00:39:10.800008 containerd[1463]: time="2025-10-31T00:39:10.799892613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:10.810963 containerd[1463]: time="2025-10-31T00:39:10.810890000Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 31 00:39:10.816644 containerd[1463]: time="2025-10-31T00:39:10.816555229Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:10.821418 containerd[1463]: time="2025-10-31T00:39:10.821381512Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:10.822957 containerd[1463]: time="2025-10-31T00:39:10.822916399Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 2.354812707s" Oct 31 00:39:10.822957 containerd[1463]: time="2025-10-31T00:39:10.822951303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 31 00:39:10.824433 containerd[1463]: time="2025-10-31T00:39:10.824380960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 31 00:39:12.387517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466749904.mount: Deactivated successfully. Oct 31 00:39:13.138675 containerd[1463]: time="2025-10-31T00:39:13.138589005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:13.139218 containerd[1463]: time="2025-10-31T00:39:13.139177559Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 31 00:39:13.140303 containerd[1463]: time="2025-10-31T00:39:13.140275054Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:13.142660 containerd[1463]: time="2025-10-31T00:39:13.142632902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:13.143418 containerd[1463]: time="2025-10-31T00:39:13.143375302Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.31894792s" Oct 31 00:39:13.143453 containerd[1463]: time="2025-10-31T00:39:13.143415523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 31 00:39:13.143968 containerd[1463]: time="2025-10-31T00:39:13.143938168Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 31 00:39:13.767881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696687651.mount: Deactivated successfully. 
Oct 31 00:39:15.181639 containerd[1463]: time="2025-10-31T00:39:15.181561225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:15.182626 containerd[1463]: time="2025-10-31T00:39:15.182557199Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 31 00:39:15.184207 containerd[1463]: time="2025-10-31T00:39:15.184161269Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:15.216628 containerd[1463]: time="2025-10-31T00:39:15.216525003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:15.217591 containerd[1463]: time="2025-10-31T00:39:15.217525260Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.07355613s" Oct 31 00:39:15.217591 containerd[1463]: time="2025-10-31T00:39:15.217585610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 31 00:39:15.218195 containerd[1463]: time="2025-10-31T00:39:15.218170600Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 31 00:39:15.718891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455934887.mount: Deactivated successfully. 
Oct 31 00:39:15.725115 containerd[1463]: time="2025-10-31T00:39:15.725054582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:15.725945 containerd[1463]: time="2025-10-31T00:39:15.725872298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 31 00:39:15.727147 containerd[1463]: time="2025-10-31T00:39:15.727098995Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:15.729671 containerd[1463]: time="2025-10-31T00:39:15.729601884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:15.730441 containerd[1463]: time="2025-10-31T00:39:15.730394212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 512.153104ms" Oct 31 00:39:15.730506 containerd[1463]: time="2025-10-31T00:39:15.730445616Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 31 00:39:15.731094 containerd[1463]: time="2025-10-31T00:39:15.731006660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 31 00:39:18.236254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 00:39:18.245020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:39:18.831858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:39:18.837390 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:39:18.929766 kubelet[2003]: E1031 00:39:18.929686 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:39:18.934823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:39:18.935059 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 31 00:39:20.288378 containerd[1463]: time="2025-10-31T00:39:20.288299495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:20.289288 containerd[1463]: time="2025-10-31T00:39:20.289212922Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 31 00:39:20.290782 containerd[1463]: time="2025-10-31T00:39:20.290728826Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:20.294206 containerd[1463]: time="2025-10-31T00:39:20.294161897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:20.295472 containerd[1463]: time="2025-10-31T00:39:20.295427686Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.564382915s" Oct 31 00:39:20.295472 containerd[1463]: time="2025-10-31T00:39:20.295469209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 31 00:39:23.696600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:39:23.714244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:39:23.755875 systemd[1]: Reloading requested from client PID 2044 ('systemctl') (unit session-7.scope)... Oct 31 00:39:23.755897 systemd[1]: Reloading... Oct 31 00:39:23.869821 zram_generator::config[2089]: No configuration found. Oct 31 00:39:24.245619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:39:24.329402 systemd[1]: Reloading finished in 573 ms. Oct 31 00:39:24.516847 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 00:39:24.516993 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 00:39:24.517369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:39:24.530949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:39:25.640012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:39:25.646397 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:39:26.319100 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:39:26.319100 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 00:39:26.319591 kubelet[2130]: I1031 00:39:26.319168 2130 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:39:27.297513 kubelet[2130]: I1031 00:39:27.297442 2130 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 00:39:27.297513 kubelet[2130]: I1031 00:39:27.297490 2130 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:39:27.297725 kubelet[2130]: I1031 00:39:27.297537 2130 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 00:39:27.297725 kubelet[2130]: I1031 00:39:27.297551 2130 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 00:39:27.297836 kubelet[2130]: I1031 00:39:27.297815 2130 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 00:39:27.403447 kubelet[2130]: E1031 00:39:27.403397 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 00:39:27.403447 kubelet[2130]: I1031 00:39:27.403402 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:39:27.407244 kubelet[2130]: E1031 00:39:27.407203 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:39:27.407310 kubelet[2130]: I1031 00:39:27.407270 2130 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Oct 31 00:39:27.412787 kubelet[2130]: I1031 00:39:27.412765 2130 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 00:39:27.413152 kubelet[2130]: I1031 00:39:27.413115 2130 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:39:27.413357 kubelet[2130]: I1031 00:39:27.413142 2130 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 00:39:27.413484 kubelet[2130]: I1031 00:39:27.413373 2130 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 00:39:27.413484 kubelet[2130]: I1031 00:39:27.413386 2130 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 00:39:27.413662 kubelet[2130]: I1031 00:39:27.413642 2130 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 00:39:27.566952 kubelet[2130]: I1031 00:39:27.566771 2130 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:39:27.619478 kubelet[2130]: I1031 00:39:27.619427 2130 kubelet.go:475] "Attempting to sync node with API server" Oct 31 00:39:27.619478 kubelet[2130]: I1031 00:39:27.619471 2130 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:39:27.619732 kubelet[2130]: I1031 00:39:27.619523 2130 kubelet.go:387] "Adding apiserver pod source" Oct 31 00:39:27.619732 kubelet[2130]: I1031 00:39:27.619558 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:39:27.620413 kubelet[2130]: E1031 00:39:27.620347 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 00:39:27.620638 kubelet[2130]: E1031 00:39:27.620571 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 00:39:27.653000 kubelet[2130]: I1031 00:39:27.652935 2130 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 00:39:27.653572 kubelet[2130]: I1031 00:39:27.653538 2130 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 00:39:27.653572 kubelet[2130]: I1031 00:39:27.653571 2130 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 00:39:27.653729 kubelet[2130]: W1031 00:39:27.653695 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 00:39:27.657382 kubelet[2130]: I1031 00:39:27.656888 2130 server.go:1262] "Started kubelet" Oct 31 00:39:27.660354 kubelet[2130]: I1031 00:39:27.660094 2130 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:39:27.660408 kubelet[2130]: I1031 00:39:27.660387 2130 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 00:39:27.660838 kubelet[2130]: I1031 00:39:27.660809 2130 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:39:27.661165 kubelet[2130]: I1031 00:39:27.661142 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:39:27.662086 kubelet[2130]: I1031 00:39:27.662065 2130 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 00:39:27.664651 kubelet[2130]: I1031 00:39:27.663245 2130 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:39:27.664651 kubelet[2130]: E1031 00:39:27.663396 2130 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:39:27.664651 kubelet[2130]: I1031 00:39:27.663790 2130 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 00:39:27.664651 kubelet[2130]: I1031 00:39:27.663876 2130 reconciler.go:29] "Reconciler: start to sync state" Oct 31 00:39:27.664651 kubelet[2130]: E1031 00:39:27.664243 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 00:39:27.664651 kubelet[2130]: E1031 00:39:27.664311 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms" Oct 31 00:39:27.664651 kubelet[2130]: I1031 00:39:27.664341 2130 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:39:27.666219 kubelet[2130]: E1031 00:39:27.664492 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18736c7e00305d4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:39:27.656844623 +0000 UTC m=+2.005909142,LastTimestamp:2025-10-31 00:39:27.656844623 +0000 UTC m=+2.005909142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 00:39:27.666516 kubelet[2130]: I1031 00:39:27.666480 2130 factory.go:223] Registration of the containerd container factory successfully Oct 31 00:39:27.666516 kubelet[2130]: I1031 00:39:27.666499 2130 factory.go:223] Registration of the systemd container factory successfully Oct 31 00:39:27.668189 kubelet[2130]: E1031 00:39:27.668151 2130 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:39:27.668849 kubelet[2130]: I1031 00:39:27.668823 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:39:27.675664 kubelet[2130]: I1031 00:39:27.675264 2130 server.go:310] "Adding debug handlers to kubelet server" Oct 31 00:39:27.691057 kubelet[2130]: I1031 00:39:27.691024 2130 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:39:27.691057 kubelet[2130]: I1031 00:39:27.691042 2130 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:39:27.691057 kubelet[2130]: I1031 00:39:27.691063 2130 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:39:27.692693 kubelet[2130]: I1031 00:39:27.692645 2130 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 00:39:27.694241 kubelet[2130]: I1031 00:39:27.694199 2130 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 31 00:39:27.694301 kubelet[2130]: I1031 00:39:27.694254 2130 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 00:39:27.694356 kubelet[2130]: I1031 00:39:27.694306 2130 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 00:39:27.694386 kubelet[2130]: E1031 00:39:27.694362 2130 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:39:27.695202 kubelet[2130]: E1031 00:39:27.695166 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 00:39:27.710976 kubelet[2130]: I1031 00:39:27.710916 2130 policy_none.go:49] "None policy: Start" Oct 31 00:39:27.710976 kubelet[2130]: I1031 00:39:27.710985 2130 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 00:39:27.711187 kubelet[2130]: I1031 00:39:27.711007 2130 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 00:39:27.713699 kubelet[2130]: I1031 00:39:27.713660 2130 policy_none.go:47] "Start" Oct 31 00:39:27.721818 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 00:39:27.744103 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 00:39:27.750156 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 31 00:39:27.760713 kubelet[2130]: E1031 00:39:27.760662 2130 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 00:39:27.761571 kubelet[2130]: I1031 00:39:27.761453 2130 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:39:27.761671 kubelet[2130]: I1031 00:39:27.761566 2130 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:39:27.762061 kubelet[2130]: I1031 00:39:27.761979 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:39:27.762835 kubelet[2130]: E1031 00:39:27.762809 2130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 00:39:27.762893 kubelet[2130]: E1031 00:39:27.762882 2130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 00:39:27.810057 systemd[1]: Created slice kubepods-burstable-pod0441a46f08c8c4bf5aae5b9dbccf6ee5.slice - libcontainer container kubepods-burstable-pod0441a46f08c8c4bf5aae5b9dbccf6ee5.slice. Oct 31 00:39:27.831527 kubelet[2130]: E1031 00:39:27.831286 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:27.837386 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 31 00:39:27.841711 kubelet[2130]: E1031 00:39:27.841681 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:27.854813 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 31 00:39:27.857366 kubelet[2130]: E1031 00:39:27.857332 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:27.864291 kubelet[2130]: I1031 00:39:27.864226 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0441a46f08c8c4bf5aae5b9dbccf6ee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0441a46f08c8c4bf5aae5b9dbccf6ee5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:27.864291 kubelet[2130]: I1031 00:39:27.864264 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:27.864291 kubelet[2130]: I1031 00:39:27.864285 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:27.864291 kubelet[2130]: I1031 00:39:27.864302 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0441a46f08c8c4bf5aae5b9dbccf6ee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0441a46f08c8c4bf5aae5b9dbccf6ee5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:27.864291 kubelet[2130]: I1031 00:39:27.864324 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:27.864937 kubelet[2130]: I1031 00:39:27.864344 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:27.864937 kubelet[2130]: I1031 00:39:27.864360 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:27.864937 kubelet[2130]: I1031 00:39:27.864377 2130 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:27.864937 kubelet[2130]: I1031 00:39:27.864393 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0441a46f08c8c4bf5aae5b9dbccf6ee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0441a46f08c8c4bf5aae5b9dbccf6ee5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:27.864937 kubelet[2130]: I1031 00:39:27.864837 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:39:27.864937 kubelet[2130]: E1031 00:39:27.864841 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms" Oct 31 00:39:27.865138 kubelet[2130]: E1031 00:39:27.865110 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Oct 31 00:39:28.067743 kubelet[2130]: I1031 00:39:28.067702 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:39:28.068325 kubelet[2130]: E1031 00:39:28.068275 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Oct 31 00:39:28.266400 kubelet[2130]: E1031 00:39:28.266346 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms" Oct 31 00:39:28.277917 kubelet[2130]: E1031 00:39:28.277863 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:28.279211 containerd[1463]: time="2025-10-31T00:39:28.279138011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0441a46f08c8c4bf5aae5b9dbccf6ee5,Namespace:kube-system,Attempt:0,}" Oct 31 00:39:28.281873 kubelet[2130]: E1031 00:39:28.281748 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:28.282546 containerd[1463]: time="2025-10-31T00:39:28.282487978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 31 00:39:28.284811 kubelet[2130]: E1031 00:39:28.284775 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:28.285233 containerd[1463]: time="2025-10-31T00:39:28.285191337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 31 00:39:28.470279 
kubelet[2130]: I1031 00:39:28.470231 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:39:28.471677 kubelet[2130]: E1031 00:39:28.471583 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Oct 31 00:39:28.646893 kubelet[2130]: E1031 00:39:28.646755 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 00:39:28.920794 kubelet[2130]: E1031 00:39:28.920581 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 00:39:28.924476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118370921.mount: Deactivated successfully. Oct 31 00:39:28.933098 containerd[1463]: time="2025-10-31T00:39:28.933002632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:39:28.934160 containerd[1463]: time="2025-10-31T00:39:28.934114796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:39:28.935242 containerd[1463]: time="2025-10-31T00:39:28.935090833Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 31 00:39:28.935314 kubelet[2130]: E1031 00:39:28.935191 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 00:39:28.938321 containerd[1463]: time="2025-10-31T00:39:28.938278003Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:39:28.939143 containerd[1463]: time="2025-10-31T00:39:28.939101272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:39:28.940030 containerd[1463]: time="2025-10-31T00:39:28.939992911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:39:28.941061 containerd[1463]: time="2025-10-31T00:39:28.941020435Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:39:28.945287 containerd[1463]: time="2025-10-31T00:39:28.945231412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:39:28.946161 containerd[1463]: time="2025-10-31T00:39:28.946113023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.840142ms" Oct 31 00:39:28.947475 containerd[1463]: time="2025-10-31T00:39:28.947450720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 668.214093ms" Oct 31 00:39:28.948831 containerd[1463]: time="2025-10-31T00:39:28.948795562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 666.202827ms" Oct 31 00:39:29.006772 kubelet[2130]: E1031 00:39:29.006671 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 00:39:29.109791 kubelet[2130]: E1031 00:39:29.109729 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="1.6s" Oct 31 00:39:29.245580 containerd[1463]: time="2025-10-31T00:39:29.245448379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:39:29.245580 containerd[1463]: time="2025-10-31T00:39:29.245521878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:39:29.245580 containerd[1463]: time="2025-10-31T00:39:29.245538610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:29.245902 containerd[1463]: time="2025-10-31T00:39:29.245653134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:29.250220 containerd[1463]: time="2025-10-31T00:39:29.250095256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:39:29.250220 containerd[1463]: time="2025-10-31T00:39:29.250180876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:39:29.250220 containerd[1463]: time="2025-10-31T00:39:29.250197167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:29.250437 containerd[1463]: time="2025-10-31T00:39:29.250301594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:29.252175 containerd[1463]: time="2025-10-31T00:39:29.251275898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:39:29.252272 containerd[1463]: time="2025-10-31T00:39:29.252161565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:39:29.252272 containerd[1463]: time="2025-10-31T00:39:29.252234411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:29.252486 containerd[1463]: time="2025-10-31T00:39:29.252356281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:29.274462 kubelet[2130]: I1031 00:39:29.274408 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:39:29.275431 kubelet[2130]: E1031 00:39:29.275335 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Oct 31 00:39:29.309952 systemd[1]: Started cri-containerd-5a41a8a2244aa0aad0cd88d0582fdbd50e947adc1aceb8d869ee9f0f7cf23ba3.scope - libcontainer container 5a41a8a2244aa0aad0cd88d0582fdbd50e947adc1aceb8d869ee9f0f7cf23ba3. Oct 31 00:39:29.319893 systemd[1]: Started cri-containerd-0328c8d899bd4ec2a05ff40b174dcf45c0afdd105bd7a6a9ddbdf2664e7f1990.scope - libcontainer container 0328c8d899bd4ec2a05ff40b174dcf45c0afdd105bd7a6a9ddbdf2664e7f1990. Oct 31 00:39:29.326334 systemd[1]: Started cri-containerd-93e7b8b39142c09d36426502b1eb1f84cf9ef051f7098ac1eeaca160c5bba45d.scope - libcontainer container 93e7b8b39142c09d36426502b1eb1f84cf9ef051f7098ac1eeaca160c5bba45d. 
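The "Failed to ensure lease exists, will retry" errors are worth tracing through this stretch of the log: the retry interval doubles on each failure, 200ms, then 400ms, then 800ms, then 1.6s. A generic capped-doubling sketch of that pattern follows; the 7s ceiling is a placeholder for illustration, not a value taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // nextInterval doubles the retry interval up to a ceiling, matching the
    // 200ms -> 400ms -> 800ms -> 1.6s progression in the lease errors above.
    func nextInterval(cur, limit time.Duration) time.Duration {
        if next := cur * 2; next < limit {
            return next
        }
        return limit
    }

    func main() {
        interval := 200 * time.Millisecond
        for i := 0; i < 4; i++ {
            fmt.Println(interval) // 200ms, 400ms, 800ms, 1.6s
            interval = nextInterval(interval, 7*time.Second) // ceiling is illustrative
        }
    }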
Oct 31 00:39:29.380705 containerd[1463]: time="2025-10-31T00:39:29.379602564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a41a8a2244aa0aad0cd88d0582fdbd50e947adc1aceb8d869ee9f0f7cf23ba3\"" Oct 31 00:39:29.384272 kubelet[2130]: E1031 00:39:29.384220 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:29.388393 containerd[1463]: time="2025-10-31T00:39:29.388257784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0328c8d899bd4ec2a05ff40b174dcf45c0afdd105bd7a6a9ddbdf2664e7f1990\"" Oct 31 00:39:29.389991 kubelet[2130]: E1031 00:39:29.389101 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:29.391714 containerd[1463]: time="2025-10-31T00:39:29.391671880Z" level=info msg="CreateContainer within sandbox \"5a41a8a2244aa0aad0cd88d0582fdbd50e947adc1aceb8d869ee9f0f7cf23ba3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 00:39:29.395728 containerd[1463]: time="2025-10-31T00:39:29.395685494Z" level=info msg="CreateContainer within sandbox \"0328c8d899bd4ec2a05ff40b174dcf45c0afdd105bd7a6a9ddbdf2664e7f1990\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 00:39:29.400856 containerd[1463]: time="2025-10-31T00:39:29.400754425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0441a46f08c8c4bf5aae5b9dbccf6ee5,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e7b8b39142c09d36426502b1eb1f84cf9ef051f7098ac1eeaca160c5bba45d\"" Oct 31 00:39:29.403293 kubelet[2130]: E1031 00:39:29.403253 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:29.409355 containerd[1463]: time="2025-10-31T00:39:29.409300148Z" level=info msg="CreateContainer within sandbox \"93e7b8b39142c09d36426502b1eb1f84cf9ef051f7098ac1eeaca160c5bba45d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 00:39:29.417299 containerd[1463]: time="2025-10-31T00:39:29.417244841Z" level=info msg="CreateContainer within sandbox \"5a41a8a2244aa0aad0cd88d0582fdbd50e947adc1aceb8d869ee9f0f7cf23ba3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eb5c8ca4853edf5eca9e8bf482bbf93fa3d2cb6e5725f2919e2581f281093f60\"" Oct 31 00:39:29.417978 containerd[1463]: time="2025-10-31T00:39:29.417955398Z" level=info msg="StartContainer for \"eb5c8ca4853edf5eca9e8bf482bbf93fa3d2cb6e5725f2919e2581f281093f60\"" Oct 31 00:39:29.424425 containerd[1463]: time="2025-10-31T00:39:29.424354572Z" level=info msg="CreateContainer within sandbox \"0328c8d899bd4ec2a05ff40b174dcf45c0afdd105bd7a6a9ddbdf2664e7f1990\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b08966ea4fc219e2303d3e25984b05d5e2e6fec8e2c8055c224d246191bab2bc\"" Oct 31 00:39:29.425352 containerd[1463]: time="2025-10-31T00:39:29.425269605Z" level=info msg="StartContainer for \"b08966ea4fc219e2303d3e25984b05d5e2e6fec8e2c8055c224d246191bab2bc\"" Oct 31 
00:39:29.442326 kubelet[2130]: E1031 00:39:29.442269 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 00:39:29.445263 containerd[1463]: time="2025-10-31T00:39:29.445192542Z" level=info msg="CreateContainer within sandbox \"93e7b8b39142c09d36426502b1eb1f84cf9ef051f7098ac1eeaca160c5bba45d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ec3c1b5cf4b41ebcd367a96a8b86aaa1d9e7022e963b0257b9f4b2ecfe13c0d\"" Oct 31 00:39:29.446067 containerd[1463]: time="2025-10-31T00:39:29.445962050Z" level=info msg="StartContainer for \"1ec3c1b5cf4b41ebcd367a96a8b86aaa1d9e7022e963b0257b9f4b2ecfe13c0d\"" Oct 31 00:39:29.452098 systemd[1]: Started cri-containerd-eb5c8ca4853edf5eca9e8bf482bbf93fa3d2cb6e5725f2919e2581f281093f60.scope - libcontainer container eb5c8ca4853edf5eca9e8bf482bbf93fa3d2cb6e5725f2919e2581f281093f60. Oct 31 00:39:29.464813 systemd[1]: Started cri-containerd-b08966ea4fc219e2303d3e25984b05d5e2e6fec8e2c8055c224d246191bab2bc.scope - libcontainer container b08966ea4fc219e2303d3e25984b05d5e2e6fec8e2c8055c224d246191bab2bc. Oct 31 00:39:29.490806 systemd[1]: Started cri-containerd-1ec3c1b5cf4b41ebcd367a96a8b86aaa1d9e7022e963b0257b9f4b2ecfe13c0d.scope - libcontainer container 1ec3c1b5cf4b41ebcd367a96a8b86aaa1d9e7022e963b0257b9f4b2ecfe13c0d. Oct 31 00:39:29.521595 containerd[1463]: time="2025-10-31T00:39:29.520287499Z" level=info msg="StartContainer for \"eb5c8ca4853edf5eca9e8bf482bbf93fa3d2cb6e5725f2919e2581f281093f60\" returns successfully" Oct 31 00:39:29.531634 containerd[1463]: time="2025-10-31T00:39:29.531310095Z" level=info msg="StartContainer for \"b08966ea4fc219e2303d3e25984b05d5e2e6fec8e2c8055c224d246191bab2bc\" returns successfully" Oct 31 00:39:29.558978 containerd[1463]: time="2025-10-31T00:39:29.558917476Z" level=info msg="StartContainer for \"1ec3c1b5cf4b41ebcd367a96a8b86aaa1d9e7022e963b0257b9f4b2ecfe13c0d\" returns successfully" Oct 31 00:39:29.702917 kubelet[2130]: E1031 00:39:29.702871 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:29.703535 kubelet[2130]: E1031 00:39:29.703012 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:29.704698 kubelet[2130]: E1031 00:39:29.704672 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:29.704791 kubelet[2130]: E1031 00:39:29.704768 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:29.707773 kubelet[2130]: E1031 00:39:29.707624 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:29.708061 kubelet[2130]: E1031 00:39:29.707963 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 31 00:39:30.735275 kubelet[2130]: E1031 00:39:30.735225 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:30.736915 kubelet[2130]: E1031 00:39:30.736425 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:30.736915 kubelet[2130]: E1031 00:39:30.736125 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:30.736915 kubelet[2130]: E1031 00:39:30.736590 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:30.736915 kubelet[2130]: E1031 00:39:30.736728 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:39:30.736915 kubelet[2130]: E1031 00:39:30.736840 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:30.877312 kubelet[2130]: I1031 00:39:30.877268 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:39:32.144644 kubelet[2130]: E1031 00:39:32.144564 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 00:39:32.213191 kubelet[2130]: I1031 00:39:32.213125 2130 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:39:32.213191 kubelet[2130]: E1031 00:39:32.213190 2130 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 00:39:32.264779 kubelet[2130]: I1031 00:39:32.264672 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:32.320721 kubelet[2130]: E1031 00:39:32.320653 2130 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:32.320721 kubelet[2130]: I1031 00:39:32.320696 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:32.322402 kubelet[2130]: E1031 00:39:32.322321 2130 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:32.322402 kubelet[2130]: I1031 00:39:32.322352 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:32.324014 kubelet[2130]: E1031 00:39:32.323976 2130 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:32.623915 kubelet[2130]: I1031 00:39:32.623852 2130 apiserver.go:52] "Watching apiserver" Oct 31 00:39:32.664162 kubelet[2130]: I1031 00:39:32.664076 2130 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 00:39:34.228680 kubelet[2130]: I1031 00:39:34.228632 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:34.322069 kubelet[2130]: E1031 00:39:34.322009 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:34.739806 kubelet[2130]: E1031 00:39:34.739752 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:36.245152 kubelet[2130]: I1031 00:39:36.245106 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:36.249358 kubelet[2130]: E1031 00:39:36.249328 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:36.744926 kubelet[2130]: E1031 00:39:36.744866 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:37.131422 systemd[1]: Reloading requested from client PID 2418 ('systemctl') (unit session-7.scope)... Oct 31 00:39:37.131443 systemd[1]: Reloading... Oct 31 00:39:37.219651 zram_generator::config[2460]: No configuration found. Oct 31 00:39:37.354213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:39:37.454380 systemd[1]: Reloading finished in 322 ms. Oct 31 00:39:37.502087 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:39:37.521306 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:39:37.521664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:39:37.521722 systemd[1]: kubelet.service: Consumed 1.917s CPU time, 128.1M memory peak, 0B memory swap peak. Oct 31 00:39:37.531814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:39:37.715961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:39:37.721878 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:39:37.769582 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:39:37.769582 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
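Both kubelet generations in this log repeatedly "create a mirror pod" for each static pod read from /etc/kubernetes/manifests: the kubelet POSTs an ordinary Pod object to the API server so the file-based pod becomes visible to the cluster. A rough sketch of what marks that API copy as a mirror, assuming the well-known kubelet config annotation keys (the hash value below is illustrative, reusing a pod UID from the log):

    package main

    import "fmt"

    // Pod is a toy stand-in for the API object; only annotations matter here.
    type Pod struct {
        Name        string
        Namespace   string
        Annotations map[string]string
    }

    // markAsMirror tags the API copy of a static pod, assuming the kubelet's
    // config-source annotations; not the real kubelet code.
    func markAsMirror(p *Pod, hash string) {
        if p.Annotations == nil {
            p.Annotations = map[string]string{}
        }
        p.Annotations["kubernetes.io/config.source"] = "file" // came from a manifest file
        p.Annotations["kubernetes.io/config.mirror"] = hash   // marks the API copy as a mirror
        p.Annotations["kubernetes.io/config.hash"] = hash
    }

    func main() {
        p := Pod{Name: "kube-controller-manager-localhost", Namespace: "kube-system"}
        markAsMirror(&p, "ce161b3b11c90b0b844f2e4f86b4e8cd")
        fmt.Printf("%+v\n", p)
        // The "forbidden" errors above show this POST being rejected until the
        // system-node-critical PriorityClass exists in the cluster.
    }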
Oct 31 00:39:37.770081 kubelet[2501]: I1031 00:39:37.769623 2501 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:39:37.777765 kubelet[2501]: I1031 00:39:37.777697 2501 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 00:39:37.777765 kubelet[2501]: I1031 00:39:37.777733 2501 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:39:37.777765 kubelet[2501]: I1031 00:39:37.777770 2501 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 00:39:37.778018 kubelet[2501]: I1031 00:39:37.777786 2501 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 00:39:37.778055 kubelet[2501]: I1031 00:39:37.778042 2501 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 00:39:37.779268 kubelet[2501]: I1031 00:39:37.779239 2501 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 00:39:37.781826 kubelet[2501]: I1031 00:39:37.781667 2501 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:39:37.785728 kubelet[2501]: E1031 00:39:37.785683 2501 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:39:37.785842 kubelet[2501]: I1031 00:39:37.785756 2501 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Oct 31 00:39:37.793706 kubelet[2501]: I1031 00:39:37.793655 2501 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 00:39:37.793970 kubelet[2501]: I1031 00:39:37.793936 2501 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:39:37.794111 kubelet[2501]: I1031 00:39:37.793966 2501 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 00:39:37.794205 kubelet[2501]: I1031 00:39:37.794116 2501 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 00:39:37.794205 kubelet[2501]: I1031 00:39:37.794125 2501 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 00:39:37.794205 kubelet[2501]: I1031 00:39:37.794150 2501 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 00:39:37.795908 kubelet[2501]: I1031 00:39:37.795867 2501 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:39:37.796199 kubelet[2501]: I1031 00:39:37.796163 2501 kubelet.go:475] "Attempting to sync node with API server" Oct 31 00:39:37.796235 kubelet[2501]: I1031 00:39:37.796203 2501 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:39:37.796290 kubelet[2501]: I1031 00:39:37.796250 2501 kubelet.go:387] "Adding apiserver pod source" Oct 31 00:39:37.796290 kubelet[2501]: I1031 00:39:37.796283 2501 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:39:37.798008 kubelet[2501]: I1031 00:39:37.797959 2501 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 00:39:37.798729 kubelet[2501]: I1031 00:39:37.798680 2501 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 00:39:37.798781 kubelet[2501]: I1031 00:39:37.798737 2501 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 00:39:37.809495 
kubelet[2501]: I1031 00:39:37.806791 2501 server.go:1262] "Started kubelet" Oct 31 00:39:37.809495 kubelet[2501]: I1031 00:39:37.807178 2501 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:39:37.809495 kubelet[2501]: I1031 00:39:37.807315 2501 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:39:37.809495 kubelet[2501]: I1031 00:39:37.807377 2501 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 00:39:37.809495 kubelet[2501]: I1031 00:39:37.807648 2501 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:39:37.809495 kubelet[2501]: I1031 00:39:37.808523 2501 server.go:310] "Adding debug handlers to kubelet server" Oct 31 00:39:37.809495 kubelet[2501]: I1031 00:39:37.808690 2501 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:39:37.809495 kubelet[2501]: I1031 00:39:37.808527 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:39:37.810075 kubelet[2501]: I1031 00:39:37.809842 2501 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 00:39:37.810075 kubelet[2501]: I1031 00:39:37.809988 2501 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 00:39:37.810247 kubelet[2501]: I1031 00:39:37.810215 2501 reconciler.go:29] "Reconciler: start to sync state" Oct 31 00:39:37.811501 kubelet[2501]: I1031 00:39:37.811443 2501 factory.go:223] Registration of the systemd container factory successfully Oct 31 00:39:37.811636 kubelet[2501]: I1031 00:39:37.811584 2501 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:39:37.814336 kubelet[2501]: E1031 00:39:37.814301 2501 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:39:37.814709 kubelet[2501]: E1031 00:39:37.814684 2501 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:39:37.816140 kubelet[2501]: I1031 00:39:37.816089 2501 factory.go:223] Registration of the containerd container factory successfully Oct 31 00:39:37.827665 kubelet[2501]: I1031 00:39:37.827561 2501 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 31 00:39:37.841561 kubelet[2501]: I1031 00:39:37.841272 2501 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 31 00:39:37.841561 kubelet[2501]: I1031 00:39:37.841300 2501 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 00:39:37.841561 kubelet[2501]: I1031 00:39:37.841331 2501 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 00:39:37.841561 kubelet[2501]: E1031 00:39:37.841399 2501 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:39:37.862596 kubelet[2501]: I1031 00:39:37.862561 2501 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:39:37.862596 kubelet[2501]: I1031 00:39:37.862580 2501 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:39:37.862596 kubelet[2501]: I1031 00:39:37.862627 2501 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:39:37.862849 kubelet[2501]: I1031 00:39:37.862765 2501 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 00:39:37.862849 kubelet[2501]: I1031 00:39:37.862776 2501 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 00:39:37.862849 kubelet[2501]: I1031 00:39:37.862795 2501 policy_none.go:49] "None policy: Start" Oct 31 00:39:37.862849 kubelet[2501]: I1031 00:39:37.862804 2501 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 00:39:37.862849 kubelet[2501]: I1031 00:39:37.862814 2501 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 00:39:37.863007 kubelet[2501]: I1031 00:39:37.862952 2501 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 31 00:39:37.863007 kubelet[2501]: I1031 00:39:37.862965 2501 policy_none.go:47] "Start" Oct 31 00:39:37.869504 kubelet[2501]: E1031 00:39:37.869466 2501 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 00:39:37.869908 kubelet[2501]: I1031 00:39:37.869719 2501 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:39:37.869908 kubelet[2501]: I1031 00:39:37.869735 2501 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:39:37.870178 kubelet[2501]: I1031 00:39:37.870155 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:39:37.871708 kubelet[2501]: E1031 00:39:37.871652 2501 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 00:39:37.943139 kubelet[2501]: I1031 00:39:37.942994 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:37.943139 kubelet[2501]: I1031 00:39:37.942994 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:37.943139 kubelet[2501]: I1031 00:39:37.943105 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:37.977228 kubelet[2501]: I1031 00:39:37.976854 2501 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:39:37.980661 kubelet[2501]: E1031 00:39:37.980186 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:37.987048 kubelet[2501]: E1031 00:39:37.986251 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:37.989541 kubelet[2501]: I1031 00:39:37.989462 2501 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 00:39:37.989770 kubelet[2501]: I1031 00:39:37.989585 2501 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:39:38.112096 kubelet[2501]: I1031 00:39:38.112050 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0441a46f08c8c4bf5aae5b9dbccf6ee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0441a46f08c8c4bf5aae5b9dbccf6ee5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:38.112259 kubelet[2501]: I1031 00:39:38.112180 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0441a46f08c8c4bf5aae5b9dbccf6ee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0441a46f08c8c4bf5aae5b9dbccf6ee5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:38.112259 kubelet[2501]: I1031 00:39:38.112219 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0441a46f08c8c4bf5aae5b9dbccf6ee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0441a46f08c8c4bf5aae5b9dbccf6ee5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:38.112259 kubelet[2501]: I1031 00:39:38.112249 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:38.112331 kubelet[2501]: I1031 00:39:38.112305 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:38.112362 kubelet[2501]: I1031 00:39:38.112335 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:38.112362 kubelet[2501]: I1031 00:39:38.112351 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:38.112417 kubelet[2501]: I1031 00:39:38.112373 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:38.112443 kubelet[2501]: I1031 00:39:38.112402 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:38.281494 kubelet[2501]: E1031 00:39:38.281322 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:38.281494 kubelet[2501]: E1031 00:39:38.281322 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:38.287711 kubelet[2501]: E1031 00:39:38.287487 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:38.798399 kubelet[2501]: I1031 00:39:38.798310 2501 apiserver.go:52] "Watching apiserver" Oct 31 00:39:38.810515 kubelet[2501]: I1031 00:39:38.810445 2501 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 00:39:38.854248 kubelet[2501]: I1031 00:39:38.853658 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:38.854248 kubelet[2501]: I1031 00:39:38.853682 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:38.854248 kubelet[2501]: I1031 00:39:38.853773 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:38.863685 kubelet[2501]: E1031 00:39:38.862956 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 00:39:38.863685 kubelet[2501]: E1031 00:39:38.862956 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:39:38.863685 kubelet[2501]: E1031 00:39:38.863146 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:38.863685 kubelet[2501]: E1031 00:39:38.863197 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:38.864746 kubelet[2501]: E1031 00:39:38.864727 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 00:39:38.865409 kubelet[2501]: E1031 00:39:38.864965 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:38.878136 kubelet[2501]: I1031 00:39:38.877654 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.877623351 podStartE2EDuration="1.877623351s" podCreationTimestamp="2025-10-31 00:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:39:38.877464172 +0000 UTC m=+1.151447331" watchObservedRunningTime="2025-10-31 00:39:38.877623351 +0000 UTC m=+1.151606500" Oct 31 00:39:38.894901 kubelet[2501]: I1031 00:39:38.894797 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.8947712 podStartE2EDuration="4.8947712s" podCreationTimestamp="2025-10-31 00:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:39:38.893989802 +0000 UTC m=+1.167972961" watchObservedRunningTime="2025-10-31 00:39:38.8947712 +0000 UTC m=+1.168754359" Oct 31 00:39:38.894901 kubelet[2501]: I1031 00:39:38.894899 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.894892729 podStartE2EDuration="2.894892729s" podCreationTimestamp="2025-10-31 00:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:39:38.885707465 +0000 UTC m=+1.159690614" watchObservedRunningTime="2025-10-31 00:39:38.894892729 +0000 UTC m=+1.168875878" Oct 31 00:39:39.544838 update_engine[1450]: I20251031 00:39:39.544688 1450 update_attempter.cc:509] Updating boot flags... 
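The podStartSLOduration figures check out by simple subtraction: with firstStartedPulling and lastFinishedPulling at the zero time (nothing was pulled for these static pods), the metric reduces to watchObservedRunningTime minus podCreationTimestamp, e.g. 00:39:38.877623351 - 00:39:37 = 1.877623351s for kube-apiserver-localhost. A quick check of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the podStartSLOduration printed for kube-apiserver-localhost:
    // with zero pull timestamps it is watch-observed running time minus the
    // pod creation timestamp.
    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-10-31 00:39:37 +0000 UTC")
        running, _ := time.Parse(layout, "2025-10-31 00:39:38.877623351 +0000 UTC")
        fmt.Println(running.Sub(created).Seconds()) // 1.877623351
    }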
Oct 31 00:39:39.855902 kubelet[2501]: E1031 00:39:39.855158 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:39.855902 kubelet[2501]: E1031 00:39:39.855303 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:39.855902 kubelet[2501]: E1031 00:39:39.855310 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:40.271579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2562) Oct 31 00:39:40.321652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2564) Oct 31 00:39:40.365691 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2564) Oct 31 00:39:41.086531 kubelet[2501]: I1031 00:39:41.086487 2501 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 00:39:41.087168 containerd[1463]: time="2025-10-31T00:39:41.086915364Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 00:39:41.087644 kubelet[2501]: I1031 00:39:41.087204 2501 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 00:39:41.586527 kubelet[2501]: E1031 00:39:41.586461 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:41.859385 kubelet[2501]: E1031 00:39:41.859221 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:42.134804 systemd[1]: Created slice kubepods-besteffort-pod2380adde_36b7_49f2_9bfc_86ed1ab6b188.slice - libcontainer container kubepods-besteffort-pod2380adde_36b7_49f2_9bfc_86ed1ab6b188.slice. 
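Once the node object exists, the "Updating runtime config through cri with podcidr" entry shows the kubelet handing the node's pod range to containerd over CRI (the UpdateRuntimeConfig call), after which containerd waits for a CNI config to be dropped in, as its "No cni config template is specified" message says. A quick sanity check of what the /24 in the log provides per node:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // The kubelet pushes the node's pod range to the runtime over CRI; a /24
    // like the one in the log gives the node 256 addresses for pod IPs.
    func main() {
        prefix := netip.MustParsePrefix("192.168.0.0/24")
        fmt.Println(prefix.Addr(), prefix.Bits()) // 192.168.0.0 24
        fmt.Println(1 << (32 - prefix.Bits()))    // 256 addresses in the range
    }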
Oct 31 00:39:42.141369 kubelet[2501]: I1031 00:39:42.140374 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2380adde-36b7-49f2-9bfc-86ed1ab6b188-lib-modules\") pod \"kube-proxy-xvhlh\" (UID: \"2380adde-36b7-49f2-9bfc-86ed1ab6b188\") " pod="kube-system/kube-proxy-xvhlh" Oct 31 00:39:42.141369 kubelet[2501]: I1031 00:39:42.141147 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2380adde-36b7-49f2-9bfc-86ed1ab6b188-kube-proxy\") pod \"kube-proxy-xvhlh\" (UID: \"2380adde-36b7-49f2-9bfc-86ed1ab6b188\") " pod="kube-system/kube-proxy-xvhlh" Oct 31 00:39:42.141369 kubelet[2501]: I1031 00:39:42.141165 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2380adde-36b7-49f2-9bfc-86ed1ab6b188-xtables-lock\") pod \"kube-proxy-xvhlh\" (UID: \"2380adde-36b7-49f2-9bfc-86ed1ab6b188\") " pod="kube-system/kube-proxy-xvhlh" Oct 31 00:39:42.141369 kubelet[2501]: I1031 00:39:42.141180 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c66km\" (UniqueName: \"kubernetes.io/projected/2380adde-36b7-49f2-9bfc-86ed1ab6b188-kube-api-access-c66km\") pod \"kube-proxy-xvhlh\" (UID: \"2380adde-36b7-49f2-9bfc-86ed1ab6b188\") " pod="kube-system/kube-proxy-xvhlh" Oct 31 00:39:42.236544 systemd[1]: Created slice kubepods-besteffort-pod5247396a_3637_420b_ba1a_4f2fb50ebdd0.slice - libcontainer container kubepods-besteffort-pod5247396a_3637_420b_ba1a_4f2fb50ebdd0.slice. Oct 31 00:39:42.241906 kubelet[2501]: I1031 00:39:42.241819 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5247396a-3637-420b-ba1a-4f2fb50ebdd0-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-whcgq\" (UID: \"5247396a-3637-420b-ba1a-4f2fb50ebdd0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-whcgq" Oct 31 00:39:42.241906 kubelet[2501]: I1031 00:39:42.241868 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27894\" (UniqueName: \"kubernetes.io/projected/5247396a-3637-420b-ba1a-4f2fb50ebdd0-kube-api-access-27894\") pod \"tigera-operator-65cdcdfd6d-whcgq\" (UID: \"5247396a-3637-420b-ba1a-4f2fb50ebdd0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-whcgq" Oct 31 00:39:42.449297 kubelet[2501]: E1031 00:39:42.449132 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:42.450109 containerd[1463]: time="2025-10-31T00:39:42.450063346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvhlh,Uid:2380adde-36b7-49f2-9bfc-86ed1ab6b188,Namespace:kube-system,Attempt:0,}" Oct 31 00:39:42.482683 containerd[1463]: time="2025-10-31T00:39:42.482307135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:39:42.482683 containerd[1463]: time="2025-10-31T00:39:42.482393998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:39:42.482683 containerd[1463]: time="2025-10-31T00:39:42.482423423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:42.482683 containerd[1463]: time="2025-10-31T00:39:42.482537006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:42.513777 systemd[1]: Started cri-containerd-01238636647ec9975ec5e735462b89d8c01bddeae71a9527e096aa6bbdc89f9d.scope - libcontainer container 01238636647ec9975ec5e735462b89d8c01bddeae71a9527e096aa6bbdc89f9d. Oct 31 00:39:42.538530 containerd[1463]: time="2025-10-31T00:39:42.538444583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvhlh,Uid:2380adde-36b7-49f2-9bfc-86ed1ab6b188,Namespace:kube-system,Attempt:0,} returns sandbox id \"01238636647ec9975ec5e735462b89d8c01bddeae71a9527e096aa6bbdc89f9d\"" Oct 31 00:39:42.539540 kubelet[2501]: E1031 00:39:42.539493 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:42.545297 containerd[1463]: time="2025-10-31T00:39:42.545239830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-whcgq,Uid:5247396a-3637-420b-ba1a-4f2fb50ebdd0,Namespace:tigera-operator,Attempt:0,}" Oct 31 00:39:42.547297 containerd[1463]: time="2025-10-31T00:39:42.547248263Z" level=info msg="CreateContainer within sandbox \"01238636647ec9975ec5e735462b89d8c01bddeae71a9527e096aa6bbdc89f9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 00:39:42.576288 containerd[1463]: time="2025-10-31T00:39:42.576143670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:39:42.576288 containerd[1463]: time="2025-10-31T00:39:42.576209033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:39:42.576288 containerd[1463]: time="2025-10-31T00:39:42.576223892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:42.576496 containerd[1463]: time="2025-10-31T00:39:42.576373001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:42.576826 containerd[1463]: time="2025-10-31T00:39:42.576788341Z" level=info msg="CreateContainer within sandbox \"01238636647ec9975ec5e735462b89d8c01bddeae71a9527e096aa6bbdc89f9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"612084f8ec16d922d5807bffc4866d54214d9f3fa4facd3b26d083a7bb0a967b\"" Oct 31 00:39:42.577949 containerd[1463]: time="2025-10-31T00:39:42.577907965Z" level=info msg="StartContainer for \"612084f8ec16d922d5807bffc4866d54214d9f3fa4facd3b26d083a7bb0a967b\"" Oct 31 00:39:42.603803 systemd[1]: Started cri-containerd-9b18c50220a4f6fc325425f4bd53da818306d6e330d2325f35a1cbb146d6846a.scope - libcontainer container 9b18c50220a4f6fc325425f4bd53da818306d6e330d2325f35a1cbb146d6846a. Oct 31 00:39:42.611804 systemd[1]: Started cri-containerd-612084f8ec16d922d5807bffc4866d54214d9f3fa4facd3b26d083a7bb0a967b.scope - libcontainer container 612084f8ec16d922d5807bffc4866d54214d9f3fa4facd3b26d083a7bb0a967b. 
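The four VerifyControllerAttachedVolume entries for kube-proxy-xvhlh map onto the standard kube-proxy pod volumes. A minimal sketch, with the volume names as logged and the host paths assumed to be the usual kubeadm defaults; the kube-api-access-c66km volume is the projected service-account token, injected automatically rather than declared in the manifest:

    volumes:
      - name: kube-proxy             # ConfigMap carrying the kube-proxy configuration
        configMap:
          name: kube-proxy
      - name: xtables-lock           # shared lock file serializing iptables writers
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
      - name: lib-modules            # kernel modules for conntrack/ipvs, mounted read-only
        hostPath:
          path: /lib/modules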
Oct 31 00:39:42.646861 containerd[1463]: time="2025-10-31T00:39:42.646791150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-whcgq,Uid:5247396a-3637-420b-ba1a-4f2fb50ebdd0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9b18c50220a4f6fc325425f4bd53da818306d6e330d2325f35a1cbb146d6846a\"" Oct 31 00:39:42.649394 containerd[1463]: time="2025-10-31T00:39:42.649344928Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 00:39:42.857234 containerd[1463]: time="2025-10-31T00:39:42.857169270Z" level=info msg="StartContainer for \"612084f8ec16d922d5807bffc4866d54214d9f3fa4facd3b26d083a7bb0a967b\" returns successfully" Oct 31 00:39:42.889754 kubelet[2501]: E1031 00:39:42.889721 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:43.080680 kubelet[2501]: I1031 00:39:43.080575 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xvhlh" podStartSLOduration=1.080553131 podStartE2EDuration="1.080553131s" podCreationTimestamp="2025-10-31 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:39:43.048330197 +0000 UTC m=+5.322313347" watchObservedRunningTime="2025-10-31 00:39:43.080553131 +0000 UTC m=+5.354536280" Oct 31 00:39:44.845351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375741500.mount: Deactivated successfully. Oct 31 00:39:45.670948 kubelet[2501]: E1031 00:39:45.670901 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:45.895327 kubelet[2501]: E1031 00:39:45.895275 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:46.698721 containerd[1463]: time="2025-10-31T00:39:46.698654904Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:46.760766 containerd[1463]: time="2025-10-31T00:39:46.760690748Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 00:39:46.768513 containerd[1463]: time="2025-10-31T00:39:46.768457656Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:46.775323 containerd[1463]: time="2025-10-31T00:39:46.775288106Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:39:46.776200 containerd[1463]: time="2025-10-31T00:39:46.776151397Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.126769238s" Oct 31 00:39:46.776253 containerd[1463]: time="2025-10-31T00:39:46.776201280Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 00:39:46.785955 containerd[1463]: time="2025-10-31T00:39:46.785913352Z" level=info msg="CreateContainer within sandbox \"9b18c50220a4f6fc325425f4bd53da818306d6e330d2325f35a1cbb146d6846a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 00:39:46.804703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount960140849.mount: Deactivated successfully. Oct 31 00:39:46.806459 containerd[1463]: time="2025-10-31T00:39:46.806428633Z" level=info msg="CreateContainer within sandbox \"9b18c50220a4f6fc325425f4bd53da818306d6e330d2325f35a1cbb146d6846a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2ae5e6004187eb6eb41ed1fbdf0acef1e3384b57749db97ae47b7105e52f4161\"" Oct 31 00:39:46.807000 containerd[1463]: time="2025-10-31T00:39:46.806975099Z" level=info msg="StartContainer for \"2ae5e6004187eb6eb41ed1fbdf0acef1e3384b57749db97ae47b7105e52f4161\"" Oct 31 00:39:46.838791 systemd[1]: Started cri-containerd-2ae5e6004187eb6eb41ed1fbdf0acef1e3384b57749db97ae47b7105e52f4161.scope - libcontainer container 2ae5e6004187eb6eb41ed1fbdf0acef1e3384b57749db97ae47b7105e52f4161. Oct 31 00:39:46.867098 containerd[1463]: time="2025-10-31T00:39:46.867050832Z" level=info msg="StartContainer for \"2ae5e6004187eb6eb41ed1fbdf0acef1e3384b57749db97ae47b7105e52f4161\" returns successfully" Oct 31 00:39:46.929010 kubelet[2501]: I1031 00:39:46.928946 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-whcgq" podStartSLOduration=0.798275916 podStartE2EDuration="4.926942058s" podCreationTimestamp="2025-10-31 00:39:42 +0000 UTC" firstStartedPulling="2025-10-31 00:39:42.648299223 +0000 UTC m=+4.922282372" lastFinishedPulling="2025-10-31 00:39:46.776965365 +0000 UTC m=+9.050948514" observedRunningTime="2025-10-31 00:39:46.926237605 +0000 UTC m=+9.200220764" watchObservedRunningTime="2025-10-31 00:39:46.926942058 +0000 UTC m=+9.200925207" Oct 31 00:39:47.131837 kubelet[2501]: E1031 00:39:47.131800 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:47.907329 kubelet[2501]: E1031 00:39:47.907110 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:52.303477 sudo[1640]: pam_unix(sudo:session): session closed for user root Oct 31 00:39:52.312685 sshd[1637]: pam_unix(sshd:session): session closed for user core Oct 31 00:39:52.319673 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:55622.service: Deactivated successfully. Oct 31 00:39:52.328165 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 00:39:52.328538 systemd[1]: session-7.scope: Consumed 6.581s CPU time, 158.8M memory peak, 0B memory swap peak. Oct 31 00:39:52.329773 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Oct 31 00:39:52.331632 systemd-logind[1448]: Removed session 7. Oct 31 00:39:56.829854 systemd[1]: Created slice kubepods-besteffort-pod7c2124d4_ad2a_4e81_b71c_d14361070a18.slice - libcontainer container kubepods-besteffort-pod7c2124d4_ad2a_4e81_b71c_d14361070a18.slice. 
Oct 31 00:39:56.847287 kubelet[2501]: I1031 00:39:56.847217 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7c2124d4-ad2a-4e81-b71c-d14361070a18-typha-certs\") pod \"calico-typha-779cbb8b5d-rrzq8\" (UID: \"7c2124d4-ad2a-4e81-b71c-d14361070a18\") " pod="calico-system/calico-typha-779cbb8b5d-rrzq8" Oct 31 00:39:56.847287 kubelet[2501]: I1031 00:39:56.847289 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2124d4-ad2a-4e81-b71c-d14361070a18-tigera-ca-bundle\") pod \"calico-typha-779cbb8b5d-rrzq8\" (UID: \"7c2124d4-ad2a-4e81-b71c-d14361070a18\") " pod="calico-system/calico-typha-779cbb8b5d-rrzq8" Oct 31 00:39:56.847900 kubelet[2501]: I1031 00:39:56.847319 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6mz\" (UniqueName: \"kubernetes.io/projected/7c2124d4-ad2a-4e81-b71c-d14361070a18-kube-api-access-ht6mz\") pod \"calico-typha-779cbb8b5d-rrzq8\" (UID: \"7c2124d4-ad2a-4e81-b71c-d14361070a18\") " pod="calico-system/calico-typha-779cbb8b5d-rrzq8" Oct 31 00:39:56.906454 systemd[1]: Created slice kubepods-besteffort-podc4af052d_e6c5_4a1b_b5e6_f9e250da8e74.slice - libcontainer container kubepods-besteffort-podc4af052d_e6c5_4a1b_b5e6_f9e250da8e74.slice. Oct 31 00:39:56.948744 kubelet[2501]: I1031 00:39:56.948113 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-flexvol-driver-host\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.948744 kubelet[2501]: I1031 00:39:56.948191 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-lib-modules\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.948744 kubelet[2501]: I1031 00:39:56.948228 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-tigera-ca-bundle\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.948744 kubelet[2501]: I1031 00:39:56.948250 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-cni-log-dir\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.948744 kubelet[2501]: I1031 00:39:56.948269 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-var-lib-calico\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.949123 kubelet[2501]: I1031 00:39:56.948290 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-xtables-lock\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.949123 kubelet[2501]: I1031 00:39:56.948311 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-node-certs\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.949123 kubelet[2501]: I1031 00:39:56.948329 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp96z\" (UniqueName: \"kubernetes.io/projected/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-kube-api-access-qp96z\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.949123 kubelet[2501]: I1031 00:39:56.948344 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-cni-bin-dir\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.949123 kubelet[2501]: I1031 00:39:56.948361 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-cni-net-dir\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.949304 kubelet[2501]: I1031 00:39:56.948379 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-policysync\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:56.949304 kubelet[2501]: I1031 00:39:56.948423 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4af052d-e6c5-4a1b-b5e6-f9e250da8e74-var-run-calico\") pod \"calico-node-7hvws\" (UID: \"c4af052d-e6c5-4a1b-b5e6-f9e250da8e74\") " pod="calico-system/calico-node-7hvws" Oct 31 00:39:57.004980 kubelet[2501]: E1031 00:39:57.004708 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:39:57.049378 kubelet[2501]: I1031 00:39:57.049292 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b8404757-a167-4c06-a272-e0eda36ae575-registration-dir\") pod \"csi-node-driver-6gj62\" (UID: \"b8404757-a167-4c06-a272-e0eda36ae575\") " pod="calico-system/csi-node-driver-6gj62" Oct 31 00:39:57.049378 kubelet[2501]: I1031 00:39:57.049355 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/b8404757-a167-4c06-a272-e0eda36ae575-socket-dir\") pod \"csi-node-driver-6gj62\" (UID: \"b8404757-a167-4c06-a272-e0eda36ae575\") " pod="calico-system/csi-node-driver-6gj62" Oct 31 00:39:57.049600 kubelet[2501]: I1031 00:39:57.049514 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8404757-a167-4c06-a272-e0eda36ae575-kubelet-dir\") pod \"csi-node-driver-6gj62\" (UID: \"b8404757-a167-4c06-a272-e0eda36ae575\") " pod="calico-system/csi-node-driver-6gj62" Oct 31 00:39:57.049600 kubelet[2501]: I1031 00:39:57.049560 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbrw\" (UniqueName: \"kubernetes.io/projected/b8404757-a167-4c06-a272-e0eda36ae575-kube-api-access-9gbrw\") pod \"csi-node-driver-6gj62\" (UID: \"b8404757-a167-4c06-a272-e0eda36ae575\") " pod="calico-system/csi-node-driver-6gj62" Oct 31 00:39:57.049670 kubelet[2501]: I1031 00:39:57.049634 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b8404757-a167-4c06-a272-e0eda36ae575-varrun\") pod \"csi-node-driver-6gj62\" (UID: \"b8404757-a167-4c06-a272-e0eda36ae575\") " pod="calico-system/csi-node-driver-6gj62" Oct 31 00:39:57.051895 kubelet[2501]: E1031 00:39:57.051844 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:39:57.051895 kubelet[2501]: W1031 00:39:57.051877 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:39:57.052034 kubelet[2501]: E1031 00:39:57.051916 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:39:57.058242 kubelet[2501]: E1031 00:39:57.058194 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:39:57.058242 kubelet[2501]: W1031 00:39:57.058221 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:39:57.058242 kubelet[2501]: E1031 00:39:57.058250 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:39:57.063691 kubelet[2501]: E1031 00:39:57.063643 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:39:57.063691 kubelet[2501]: W1031 00:39:57.063674 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:39:57.063852 kubelet[2501]: E1031 00:39:57.063705 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [The identical three-entry FlexVolume init failure (driver-call.go:262, driver-call.go:149, plugins.go:697; "executable file not found in $PATH" / "unexpected end of JSON input") repeats verbatim for every plugin probe from 00:39:57.150 through 00:39:57.158; duplicates omitted.] Oct 31 00:39:57.226578 kubelet[2501]: E1031 00:39:57.226523 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:39:57.226578 kubelet[2501]: W1031 00:39:57.226556 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:39:57.226578 kubelet[2501]: E1031 00:39:57.226582 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 31 00:39:57.257042 kubelet[2501]: E1031 00:39:57.256974 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:57.267156 containerd[1463]: time="2025-10-31T00:39:57.267086523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-779cbb8b5d-rrzq8,Uid:7c2124d4-ad2a-4e81-b71c-d14361070a18,Namespace:calico-system,Attempt:0,}" Oct 31 00:39:57.299850 kubelet[2501]: E1031 00:39:57.299809 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:39:57.299850 kubelet[2501]: W1031 00:39:57.299841 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:39:57.300031 kubelet[2501]: E1031 00:39:57.299886 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:39:57.326389 kubelet[2501]: E1031 00:39:57.326309 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:57.327484 containerd[1463]: time="2025-10-31T00:39:57.327236665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hvws,Uid:c4af052d-e6c5-4a1b-b5e6-f9e250da8e74,Namespace:calico-system,Attempt:0,}" Oct 31 00:39:57.542665 containerd[1463]: time="2025-10-31T00:39:57.542506634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:39:57.542665 containerd[1463]: time="2025-10-31T00:39:57.542587436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:39:57.542665 containerd[1463]: time="2025-10-31T00:39:57.542601132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:57.543332 containerd[1463]: time="2025-10-31T00:39:57.543186190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:57.565858 systemd[1]: Started cri-containerd-5e65cbbadebd837f04eeff9bb9c6b4ff9ab4a5a3b7388e7f7498b759699ab849.scope - libcontainer container 5e65cbbadebd837f04eeff9bb9c6b4ff9ab4a5a3b7388e7f7498b759699ab849. 
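The two RunPodSandbox calls here (calico-typha-779cbb8b5d-rrzq8 and calico-node-7hvws) are the tigera-operator acting on its Installation resource: the operator renders the typha Deployment and the calico-node DaemonSet from that single object. A minimal sketch of such a resource; the pool CIDR is an assumption, chosen only so that it contains the node's 192.168.0.0/24 podCIDR seen earlier:

    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default                  # the operator reconciles this singleton name
    spec:
      calicoNetwork:
        ipPools:
          - cidr: 192.168.0.0/16     # assumed pool covering the node podCIDR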
Oct 31 00:39:57.613328 containerd[1463]: time="2025-10-31T00:39:57.613266693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-779cbb8b5d-rrzq8,Uid:7c2124d4-ad2a-4e81-b71c-d14361070a18,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e65cbbadebd837f04eeff9bb9c6b4ff9ab4a5a3b7388e7f7498b759699ab849\"" Oct 31 00:39:57.614114 kubelet[2501]: E1031 00:39:57.614082 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:57.614838 containerd[1463]: time="2025-10-31T00:39:57.614808507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 00:39:57.661585 containerd[1463]: time="2025-10-31T00:39:57.661411790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:39:57.661585 containerd[1463]: time="2025-10-31T00:39:57.661496248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:39:57.661585 containerd[1463]: time="2025-10-31T00:39:57.661507970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:57.661896 containerd[1463]: time="2025-10-31T00:39:57.661628376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:39:57.695911 systemd[1]: Started cri-containerd-c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c.scope - libcontainer container c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c. Oct 31 00:39:57.729216 containerd[1463]: time="2025-10-31T00:39:57.728292556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hvws,Uid:c4af052d-e6c5-4a1b-b5e6-f9e250da8e74,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c\"" Oct 31 00:39:57.729564 kubelet[2501]: E1031 00:39:57.729535 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:39:58.842852 kubelet[2501]: E1031 00:39:58.842750 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:00.397235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728226766.mount: Deactivated successfully. 
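The csi-node-driver-6gj62 errors ("cni plugin not initialized") that bracket this stretch are an ordering effect: the kubelet cannot sandbox any non-host-network pod until calico-node drops a CNI config into the directory mounted as cni-net-dir. The long VerifyControllerAttachedVolume run for calico-node-7hvws maps onto hostPath volumes along these lines, with paths assumed from the stock Calico manifests (abridged; the flexvol path follows the /opt/libexec prefix the kubelet probes in the FlexVolume errors above):

    volumes:
      - name: cni-bin-dir            # CNI binaries, installed by the install-cni init container
        hostPath:
          path: /opt/cni/bin
      - name: cni-net-dir            # CNI config; empty until calico-node writes it,
        hostPath:                    # hence "cni plugin not initialized" in the meantime
          path: /etc/cni/net.d
      - name: var-run-calico
        hostPath:
          path: /var/run/calico
      - name: var-lib-calico
        hostPath:
          path: /var/lib/calico
      - name: policysync             # Unix socket dir used by the nodeagent~uds FlexVolume
        hostPath:
          path: /var/run/nodeagent
      - name: flexvol-driver-host
        hostPath:
          path: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds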
Oct 31 00:40:00.842213 kubelet[2501]: E1031 00:40:00.842160 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:02.598537 containerd[1463]: time="2025-10-31T00:40:02.598459523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:02.599253 containerd[1463]: time="2025-10-31T00:40:02.599201596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 00:40:02.601028 containerd[1463]: time="2025-10-31T00:40:02.600941511Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:02.605356 containerd[1463]: time="2025-10-31T00:40:02.605297527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:02.606011 containerd[1463]: time="2025-10-31T00:40:02.605940523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.991012703s" Oct 31 00:40:02.606011 containerd[1463]: time="2025-10-31T00:40:02.605979887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 00:40:02.607358 containerd[1463]: time="2025-10-31T00:40:02.607284726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 00:40:02.623028 containerd[1463]: time="2025-10-31T00:40:02.622966981Z" level=info msg="CreateContainer within sandbox \"5e65cbbadebd837f04eeff9bb9c6b4ff9ab4a5a3b7388e7f7498b759699ab849\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 00:40:02.639093 containerd[1463]: time="2025-10-31T00:40:02.639039257Z" level=info msg="CreateContainer within sandbox \"5e65cbbadebd837f04eeff9bb9c6b4ff9ab4a5a3b7388e7f7498b759699ab849\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"175f52870e8ef55bc5fab0d305ee83c82b0ba3ea5ee627f163017bda5265c9db\"" Oct 31 00:40:02.639641 containerd[1463]: time="2025-10-31T00:40:02.639563581Z" level=info msg="StartContainer for \"175f52870e8ef55bc5fab0d305ee83c82b0ba3ea5ee627f163017bda5265c9db\"" Oct 31 00:40:02.685928 systemd[1]: Started cri-containerd-175f52870e8ef55bc5fab0d305ee83c82b0ba3ea5ee627f163017bda5265c9db.scope - libcontainer container 175f52870e8ef55bc5fab0d305ee83c82b0ba3ea5ee627f163017bda5265c9db. 
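The PullImage of pod2daemon-flexvol:v3.30.4 above is what eventually quiets the repeating nodeagent~uds FlexVolume errors: calico-node runs that image as an init container that copies the uds driver binary into the flexvol-driver-host mount, after which the kubelet's init probe finds the executable and gets valid JSON back. A minimal sketch, with the container name and mount path taken from the stock Calico manifests (assumed, since the log does not show the pod spec):

    initContainers:
      - name: flexvol-driver
        image: ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4   # as pulled in the log
        volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver    # the driver is installed here as 'uds'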
Oct 31 00:40:02.749380 containerd[1463]: time="2025-10-31T00:40:02.749319792Z" level=info msg="StartContainer for \"175f52870e8ef55bc5fab0d305ee83c82b0ba3ea5ee627f163017bda5265c9db\" returns successfully" Oct 31 00:40:02.842631 kubelet[2501]: E1031 00:40:02.842556 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:02.943745 kubelet[2501]: E1031 00:40:02.943585 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:02.976746 kubelet[2501]: E1031 00:40:02.976679 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.976746 kubelet[2501]: W1031 00:40:02.976710 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.976746 kubelet[2501]: E1031 00:40:02.976746 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.977660 kubelet[2501]: E1031 00:40:02.977106 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.977660 kubelet[2501]: W1031 00:40:02.977144 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.977660 kubelet[2501]: E1031 00:40:02.977183 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.977660 kubelet[2501]: E1031 00:40:02.977461 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.977660 kubelet[2501]: W1031 00:40:02.977469 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.977660 kubelet[2501]: E1031 00:40:02.977480 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.977903 kubelet[2501]: E1031 00:40:02.977827 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.977903 kubelet[2501]: W1031 00:40:02.977837 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.977903 kubelet[2501]: E1031 00:40:02.977847 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [The same FlexVolume init failure triplet repeats verbatim from 00:40:02.978 through 00:40:02.994; duplicates omitted.] Oct 31 00:40:02.995300 kubelet[2501]: E1031 00:40:02.995263 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.995300 kubelet[2501]: W1031 00:40:02.995294 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.995350 kubelet[2501]: E1031 00:40:02.995316 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 31 00:40:02.995587 kubelet[2501]: E1031 00:40:02.995568 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.995587 kubelet[2501]: W1031 00:40:02.995583 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.995681 kubelet[2501]: E1031 00:40:02.995594 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.995861 kubelet[2501]: E1031 00:40:02.995836 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.995861 kubelet[2501]: W1031 00:40:02.995850 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.995911 kubelet[2501]: E1031 00:40:02.995859 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.996119 kubelet[2501]: E1031 00:40:02.996101 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.996119 kubelet[2501]: W1031 00:40:02.996115 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.996180 kubelet[2501]: E1031 00:40:02.996130 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.996389 kubelet[2501]: E1031 00:40:02.996373 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.996389 kubelet[2501]: W1031 00:40:02.996385 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.996444 kubelet[2501]: E1031 00:40:02.996395 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.996706 kubelet[2501]: E1031 00:40:02.996690 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.996706 kubelet[2501]: W1031 00:40:02.996703 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.996773 kubelet[2501]: E1031 00:40:02.996712 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:02.996972 kubelet[2501]: E1031 00:40:02.996955 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.996972 kubelet[2501]: W1031 00:40:02.996966 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.997032 kubelet[2501]: E1031 00:40:02.996975 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.997217 kubelet[2501]: E1031 00:40:02.997200 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.997217 kubelet[2501]: W1031 00:40:02.997213 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.997275 kubelet[2501]: E1031 00:40:02.997224 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.997501 kubelet[2501]: E1031 00:40:02.997478 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.997501 kubelet[2501]: W1031 00:40:02.997491 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.997501 kubelet[2501]: E1031 00:40:02.997500 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.997878 kubelet[2501]: E1031 00:40:02.997855 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.997878 kubelet[2501]: W1031 00:40:02.997872 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.997936 kubelet[2501]: E1031 00:40:02.997884 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.998146 kubelet[2501]: E1031 00:40:02.998127 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.998146 kubelet[2501]: W1031 00:40:02.998138 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.998199 kubelet[2501]: E1031 00:40:02.998149 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:02.998366 kubelet[2501]: E1031 00:40:02.998354 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.998366 kubelet[2501]: W1031 00:40:02.998364 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.998418 kubelet[2501]: E1031 00:40:02.998372 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.998691 kubelet[2501]: E1031 00:40:02.998676 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.998691 kubelet[2501]: W1031 00:40:02.998689 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.998763 kubelet[2501]: E1031 00:40:02.998700 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.999046 kubelet[2501]: E1031 00:40:02.999025 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.999046 kubelet[2501]: W1031 00:40:02.999043 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.999268 kubelet[2501]: E1031 00:40:02.999058 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:02.999374 kubelet[2501]: E1031 00:40:02.999346 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:02.999374 kubelet[2501]: W1031 00:40:02.999359 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:02.999374 kubelet[2501]: E1031 00:40:02.999368 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:03.040550 kubelet[2501]: I1031 00:40:03.040476 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-779cbb8b5d-rrzq8" podStartSLOduration=2.047919835 podStartE2EDuration="7.040459093s" podCreationTimestamp="2025-10-31 00:39:56 +0000 UTC" firstStartedPulling="2025-10-31 00:39:57.614499687 +0000 UTC m=+19.888482836" lastFinishedPulling="2025-10-31 00:40:02.607038945 +0000 UTC m=+24.881022094" observedRunningTime="2025-10-31 00:40:03.040056457 +0000 UTC m=+25.314039606" watchObservedRunningTime="2025-10-31 00:40:03.040459093 +0000 UTC m=+25.314442232" Oct 31 00:40:03.945150 kubelet[2501]: I1031 00:40:03.945091 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:40:03.945640 kubelet[2501]: E1031 00:40:03.945478 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:03.986458 kubelet[2501]: E1031 00:40:03.986402 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.986458 kubelet[2501]: W1031 00:40:03.986440 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.986458 kubelet[2501]: E1031 00:40:03.986470 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.986906 kubelet[2501]: E1031 00:40:03.986874 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.986906 kubelet[2501]: W1031 00:40:03.986893 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.986906 kubelet[2501]: E1031 00:40:03.986905 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.987193 kubelet[2501]: E1031 00:40:03.987164 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.987193 kubelet[2501]: W1031 00:40:03.987183 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.987193 kubelet[2501]: E1031 00:40:03.987193 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:03.987496 kubelet[2501]: E1031 00:40:03.987479 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.987496 kubelet[2501]: W1031 00:40:03.987491 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.987554 kubelet[2501]: E1031 00:40:03.987501 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.987771 kubelet[2501]: E1031 00:40:03.987756 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.987771 kubelet[2501]: W1031 00:40:03.987767 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.987844 kubelet[2501]: E1031 00:40:03.987776 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.987980 kubelet[2501]: E1031 00:40:03.987965 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.987980 kubelet[2501]: W1031 00:40:03.987978 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.988036 kubelet[2501]: E1031 00:40:03.987986 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.988190 kubelet[2501]: E1031 00:40:03.988176 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.988190 kubelet[2501]: W1031 00:40:03.988187 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.988237 kubelet[2501]: E1031 00:40:03.988196 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.988393 kubelet[2501]: E1031 00:40:03.988379 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.988393 kubelet[2501]: W1031 00:40:03.988389 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.988435 kubelet[2501]: E1031 00:40:03.988398 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:03.988632 kubelet[2501]: E1031 00:40:03.988616 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.988632 kubelet[2501]: W1031 00:40:03.988627 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.988693 kubelet[2501]: E1031 00:40:03.988639 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.988851 kubelet[2501]: E1031 00:40:03.988837 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.988851 kubelet[2501]: W1031 00:40:03.988847 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.988898 kubelet[2501]: E1031 00:40:03.988856 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.989056 kubelet[2501]: E1031 00:40:03.989038 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.989056 kubelet[2501]: W1031 00:40:03.989048 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.989056 kubelet[2501]: E1031 00:40:03.989057 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.989257 kubelet[2501]: E1031 00:40:03.989243 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.989257 kubelet[2501]: W1031 00:40:03.989254 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.989305 kubelet[2501]: E1031 00:40:03.989263 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.989487 kubelet[2501]: E1031 00:40:03.989473 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.989487 kubelet[2501]: W1031 00:40:03.989483 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.989529 kubelet[2501]: E1031 00:40:03.989492 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:03.989768 kubelet[2501]: E1031 00:40:03.989724 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.989768 kubelet[2501]: W1031 00:40:03.989740 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.989768 kubelet[2501]: E1031 00:40:03.989748 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:03.990041 kubelet[2501]: E1031 00:40:03.989985 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:03.990041 kubelet[2501]: W1031 00:40:03.989994 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:03.990041 kubelet[2501]: E1031 00:40:03.990002 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.001302 kubelet[2501]: E1031 00:40:04.001262 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.001302 kubelet[2501]: W1031 00:40:04.001290 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008507 kubelet[2501]: E1031 00:40:04.001314 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008507 kubelet[2501]: E1031 00:40:04.001649 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008507 kubelet[2501]: W1031 00:40:04.001658 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008507 kubelet[2501]: E1031 00:40:04.001667 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008507 kubelet[2501]: E1031 00:40:04.002101 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008507 kubelet[2501]: W1031 00:40:04.002110 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008507 kubelet[2501]: E1031 00:40:04.002120 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:04.008507 kubelet[2501]: E1031 00:40:04.002589 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008507 kubelet[2501]: W1031 00:40:04.002641 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008507 kubelet[2501]: E1031 00:40:04.002673 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008761 kubelet[2501]: E1031 00:40:04.002978 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008761 kubelet[2501]: W1031 00:40:04.002990 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008761 kubelet[2501]: E1031 00:40:04.003002 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008761 kubelet[2501]: E1031 00:40:04.003284 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008761 kubelet[2501]: W1031 00:40:04.003296 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008761 kubelet[2501]: E1031 00:40:04.003308 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008761 kubelet[2501]: E1031 00:40:04.003587 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008761 kubelet[2501]: W1031 00:40:04.003598 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008761 kubelet[2501]: E1031 00:40:04.003631 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008761 kubelet[2501]: E1031 00:40:04.003883 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008977 kubelet[2501]: W1031 00:40:04.003894 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008977 kubelet[2501]: E1031 00:40:04.003903 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:04.008977 kubelet[2501]: E1031 00:40:04.004131 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008977 kubelet[2501]: W1031 00:40:04.004140 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008977 kubelet[2501]: E1031 00:40:04.004148 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008977 kubelet[2501]: E1031 00:40:04.004593 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008977 kubelet[2501]: W1031 00:40:04.004649 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.008977 kubelet[2501]: E1031 00:40:04.004677 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.008977 kubelet[2501]: E1031 00:40:04.004974 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.008977 kubelet[2501]: W1031 00:40:04.004983 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009208 kubelet[2501]: E1031 00:40:04.004993 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.009208 kubelet[2501]: E1031 00:40:04.005297 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.009208 kubelet[2501]: W1031 00:40:04.005306 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009208 kubelet[2501]: E1031 00:40:04.005315 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.009208 kubelet[2501]: E1031 00:40:04.005551 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.009208 kubelet[2501]: W1031 00:40:04.005563 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009208 kubelet[2501]: E1031 00:40:04.005573 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:04.009208 kubelet[2501]: E1031 00:40:04.005858 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.009208 kubelet[2501]: W1031 00:40:04.005868 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009208 kubelet[2501]: E1031 00:40:04.005880 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.009421 kubelet[2501]: E1031 00:40:04.006151 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.009421 kubelet[2501]: W1031 00:40:04.006162 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009421 kubelet[2501]: E1031 00:40:04.006172 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.009421 kubelet[2501]: E1031 00:40:04.006440 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.009421 kubelet[2501]: W1031 00:40:04.006450 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009421 kubelet[2501]: E1031 00:40:04.006459 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.009421 kubelet[2501]: E1031 00:40:04.006738 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.009421 kubelet[2501]: W1031 00:40:04.006746 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009421 kubelet[2501]: E1031 00:40:04.006756 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:40:04.009421 kubelet[2501]: E1031 00:40:04.007149 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:40:04.009686 kubelet[2501]: W1031 00:40:04.007158 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:40:04.009686 kubelet[2501]: E1031 00:40:04.007166 2501 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:40:04.833545 containerd[1463]: time="2025-10-31T00:40:04.833466562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:04.834584 containerd[1463]: time="2025-10-31T00:40:04.834496615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 00:40:04.835933 containerd[1463]: time="2025-10-31T00:40:04.835891413Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:04.842749 kubelet[2501]: E1031 00:40:04.842669 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:04.845594 containerd[1463]: time="2025-10-31T00:40:04.845406431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:04.846816 containerd[1463]: time="2025-10-31T00:40:04.846074695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.238756636s" Oct 31 00:40:04.846816 containerd[1463]: time="2025-10-31T00:40:04.846118908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 00:40:05.011737 containerd[1463]: time="2025-10-31T00:40:05.011659997Z" level=info msg="CreateContainer within sandbox \"c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 00:40:05.141110 containerd[1463]: time="2025-10-31T00:40:05.140929028Z" level=info msg="CreateContainer within sandbox \"c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d\"" Oct 31 00:40:05.141832 containerd[1463]: time="2025-10-31T00:40:05.141592493Z" level=info msg="StartContainer for \"7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d\"" Oct 31 00:40:05.179770 systemd[1]: Started cri-containerd-7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d.scope - libcontainer container 7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d. Oct 31 00:40:05.217083 containerd[1463]: time="2025-10-31T00:40:05.217020374Z" level=info msg="StartContainer for \"7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d\" returns successfully" Oct 31 00:40:05.228498 systemd[1]: cri-containerd-7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d.scope: Deactivated successfully. 
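The burst of kubelet records above is one triple repeating: kubelet re-probes its FlexVolume plugin directory on every filesystem event, execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init", and expects a JSON status object on stdout. The binary is not installed yet, so the exec fails, stdout stays empty, and decoding the empty output fails with "unexpected end of JSON input". The storm is self-healing: the ghcr.io/flatcar/calico/pod2daemon-flexvol image pulled above (in 2.238756636s) ships exactly that driver, and the flexvol-driver container installs it. A minimal standalone sketch, assuming nothing about kubelet's internals beyond the two Go stdlib calls that produce these exact messages:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// (1) The probe execs a driver binary that is not on disk yet;
	// exec.ErrNotFound carries the exact text seen in the W1031 records.
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err) // ... executable file not found in $PATH
	}

	// (2) kubelet then decodes the driver's stdout, which is empty because
	// the exec never ran; decoding "" yields the E1031 message.
	var status struct {
		Status string `json:"status"` // a healthy driver prints e.g. {"status":"Success"}
	}
	if err := json.Unmarshal([]byte(""), &status); err != nil {
		fmt.Println("failed to unmarshal output:", err) // unexpected end of JSON input
	}
}

The calico-typha startup record embedded in the same burst is internally consistent: podStartE2EDuration (7.040459093s, from creation at 00:39:56 to observed running at 00:40:03.040) minus the image-pull window (24.881022094 - 19.888482836 = 4.992539258s) gives podStartSLOduration = 2.047919835s, i.e. the SLO figure excludes time spent pulling images.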
Oct 31 00:40:05.260564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d-rootfs.mount: Deactivated successfully. Oct 31 00:40:05.908508 containerd[1463]: time="2025-10-31T00:40:05.908430257Z" level=info msg="shim disconnected" id=7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d namespace=k8s.io Oct 31 00:40:05.908508 containerd[1463]: time="2025-10-31T00:40:05.908491552Z" level=warning msg="cleaning up after shim disconnected" id=7d08a8f75580473b6f0ceb5849cac5a426671b3460a37f5b54e3f7f41598538d namespace=k8s.io Oct 31 00:40:05.908508 containerd[1463]: time="2025-10-31T00:40:05.908502513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:40:05.952367 kubelet[2501]: E1031 00:40:05.951827 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:05.953243 containerd[1463]: time="2025-10-31T00:40:05.953142813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 00:40:06.841942 kubelet[2501]: E1031 00:40:06.841855 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:08.842367 kubelet[2501]: E1031 00:40:08.842275 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:10.842374 kubelet[2501]: E1031 00:40:10.842296 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:12.842228 kubelet[2501]: E1031 00:40:12.842139 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:13.125479 containerd[1463]: time="2025-10-31T00:40:13.125306893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:13.188894 containerd[1463]: time="2025-10-31T00:40:13.188828123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 00:40:13.264926 containerd[1463]: time="2025-10-31T00:40:13.264866713Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:13.377029 containerd[1463]: time="2025-10-31T00:40:13.376853411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:13.377732 containerd[1463]: time="2025-10-31T00:40:13.377705409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.424522171s" Oct 31 00:40:13.377732 containerd[1463]: time="2025-10-31T00:40:13.377731418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 00:40:13.510900 containerd[1463]: time="2025-10-31T00:40:13.510843750Z" level=info msg="CreateContainer within sandbox \"c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 00:40:14.260778 containerd[1463]: time="2025-10-31T00:40:14.260686548Z" level=info msg="CreateContainer within sandbox \"c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847\"" Oct 31 00:40:14.261505 containerd[1463]: time="2025-10-31T00:40:14.261473625Z" level=info msg="StartContainer for \"3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847\"" Oct 31 00:40:14.307929 systemd[1]: Started cri-containerd-3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847.scope - libcontainer container 3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847. Oct 31 00:40:14.345172 containerd[1463]: time="2025-10-31T00:40:14.345118217Z" level=info msg="StartContainer for \"3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847\" returns successfully" Oct 31 00:40:14.842746 kubelet[2501]: E1031 00:40:14.842643 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:14.971669 kubelet[2501]: E1031 00:40:14.971349 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:15.973703 kubelet[2501]: E1031 00:40:15.973649 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:16.072759 containerd[1463]: time="2025-10-31T00:40:16.072662461Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:40:16.076618 systemd[1]: cri-containerd-3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847.scope: Deactivated successfully. Oct 31 00:40:16.098991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847-rootfs.mount: Deactivated successfully. 
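install-cni (the 3f62a16e... container above) runs to completion and exits, which is why its scope is deactivated and its shim disconnects: its job is only to copy the CNI binaries and configuration into place. Until a loadable network config exists under /etc/cni/net.d, the runtime keeps reporting NetworkReady=false, kubelet keeps skipping the csi-node-driver-6gj62 pod, and the fs-change reload triggered by the calico-kubeconfig write still finds no network config. A sketch of that gate, under the assumption that readiness amounts to a config file appearing in /etc/cni/net.d (cniConfigPresent is an illustrative helper; the real loader is containerd's CRI plugin, not this code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether any plausible network config file
// exists under dir (illustrative check, not containerd's actual parser).
func cniConfigPresent(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	if !cniConfigPresent("/etc/cni/net.d") {
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady: cni plugin not initialized")
	}
}

The recurring "Nameserver limits exceeded" records are independent of the CNI bootstrap: the host's resolv.conf lists more nameservers than the classic three-slot resolver limit, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted.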
Oct 31 00:40:16.147469 kubelet[2501]: I1031 00:40:16.147412 2501 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 31 00:40:16.510484 containerd[1463]: time="2025-10-31T00:40:16.510396682Z" level=info msg="shim disconnected" id=3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847 namespace=k8s.io Oct 31 00:40:16.510484 containerd[1463]: time="2025-10-31T00:40:16.510471382Z" level=warning msg="cleaning up after shim disconnected" id=3f62a16e932600164aad7992883f51597c280bc35f25ea428802d7d580643847 namespace=k8s.io Oct 31 00:40:16.510484 containerd[1463]: time="2025-10-31T00:40:16.510482763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:40:16.525006 systemd[1]: Created slice kubepods-besteffort-podd4810036_8734_4e5d_affc_6c36413b2262.slice - libcontainer container kubepods-besteffort-podd4810036_8734_4e5d_affc_6c36413b2262.slice. Oct 31 00:40:16.536873 systemd[1]: Created slice kubepods-besteffort-pode0234a26_22e7_4dab_acf3_a0c995470142.slice - libcontainer container kubepods-besteffort-pode0234a26_22e7_4dab_acf3_a0c995470142.slice. Oct 31 00:40:16.547337 systemd[1]: Created slice kubepods-burstable-podfbc3f2d9_311f_49d7_b160_402ffa40a7c3.slice - libcontainer container kubepods-burstable-podfbc3f2d9_311f_49d7_b160_402ffa40a7c3.slice. Oct 31 00:40:16.555767 systemd[1]: Created slice kubepods-burstable-pod01815706_9b05_4375_91b1_4cc444b8c451.slice - libcontainer container kubepods-burstable-pod01815706_9b05_4375_91b1_4cc444b8c451.slice. Oct 31 00:40:16.562090 systemd[1]: Created slice kubepods-besteffort-podbb2918cc_8a31_4686_bd11_d009c753fde6.slice - libcontainer container kubepods-besteffort-podbb2918cc_8a31_4686_bd11_d009c753fde6.slice. Oct 31 00:40:16.569215 systemd[1]: Created slice kubepods-besteffort-pod9a28b64a_93c0_4cd4_83ea_3e73334b497e.slice - libcontainer container kubepods-besteffort-pod9a28b64a_93c0_4cd4_83ea_3e73334b497e.slice. Oct 31 00:40:16.576375 systemd[1]: Created slice kubepods-besteffort-pod87a28487_9bca_4535_a48a_e42ddac97eba.slice - libcontainer container kubepods-besteffort-pod87a28487_9bca_4535_a48a_e42ddac97eba.slice. 
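With a CNI config finally loadable, the node flips to Ready ("Fast updating node status as it just became ready") and kubelet starts admitting the pods that had been waiting on the network, creating one systemd slice per pod, scoped by QoS class (besteffort vs. burstable above). The slice names are mechanical: the pod UID is escaped for systemd by replacing dashes with underscores. A sketch of that naming rule (podSliceName is an illustrative helper, not kubelet's code), reproducing two of the slices created above:

package main

import (
	"fmt"
	"strings"
)

// podSliceName derives "kubepods-<qos>-pod<uid>.slice", with dashes in the
// pod UID mapped to underscores, matching the records above.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "d4810036-8734-4e5d-affc-6c36413b2262"))
	// -> kubepods-besteffort-podd4810036_8734_4e5d_affc_6c36413b2262.slice
	fmt.Println(podSliceName("burstable", "fbc3f2d9-311f-49d7-b160-402ffa40a7c3"))
	// -> kubepods-burstable-podfbc3f2d9_311f_49d7_b160_402ffa40a7c3.slice
}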
Oct 31 00:40:16.591957 kubelet[2501]: I1031 00:40:16.591903 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4810036-8734-4e5d-affc-6c36413b2262-tigera-ca-bundle\") pod \"calico-kube-controllers-6548595d47-2xk9x\" (UID: \"d4810036-8734-4e5d-affc-6c36413b2262\") " pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" Oct 31 00:40:16.591957 kubelet[2501]: I1031 00:40:16.591948 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-backend-key-pair\") pod \"whisker-6c688b7869-4llw5\" (UID: \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\") " pod="calico-system/whisker-6c688b7869-4llw5" Oct 31 00:40:16.591957 kubelet[2501]: I1031 00:40:16.591974 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bb2918cc-8a31-4686-bd11-d009c753fde6-calico-apiserver-certs\") pod \"calico-apiserver-796b6cb4bb-6pz6b\" (UID: \"bb2918cc-8a31-4686-bd11-d009c753fde6\") " pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" Oct 31 00:40:16.592317 kubelet[2501]: I1031 00:40:16.592013 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbhj9\" (UniqueName: \"kubernetes.io/projected/d4810036-8734-4e5d-affc-6c36413b2262-kube-api-access-sbhj9\") pod \"calico-kube-controllers-6548595d47-2xk9x\" (UID: \"d4810036-8734-4e5d-affc-6c36413b2262\") " pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" Oct 31 00:40:16.592317 kubelet[2501]: I1031 00:40:16.592082 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01815706-9b05-4375-91b1-4cc444b8c451-config-volume\") pod \"coredns-66bc5c9577-9754x\" (UID: \"01815706-9b05-4375-91b1-4cc444b8c451\") " pod="kube-system/coredns-66bc5c9577-9754x" Oct 31 00:40:16.592317 kubelet[2501]: I1031 00:40:16.592103 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a28487-9bca-4535-a48a-e42ddac97eba-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-vkzq5\" (UID: \"87a28487-9bca-4535-a48a-e42ddac97eba\") " pod="calico-system/goldmane-7c778bb748-vkzq5" Oct 31 00:40:16.592317 kubelet[2501]: I1031 00:40:16.592122 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmzkc\" (UniqueName: \"kubernetes.io/projected/fbc3f2d9-311f-49d7-b160-402ffa40a7c3-kube-api-access-gmzkc\") pod \"coredns-66bc5c9577-w2k7k\" (UID: \"fbc3f2d9-311f-49d7-b160-402ffa40a7c3\") " pod="kube-system/coredns-66bc5c9577-w2k7k" Oct 31 00:40:16.592317 kubelet[2501]: I1031 00:40:16.592183 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-ca-bundle\") pod \"whisker-6c688b7869-4llw5\" (UID: \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\") " pod="calico-system/whisker-6c688b7869-4llw5" Oct 31 00:40:16.592527 kubelet[2501]: I1031 00:40:16.592249 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nzj\" 
(UniqueName: \"kubernetes.io/projected/9a28b64a-93c0-4cd4-83ea-3e73334b497e-kube-api-access-r7nzj\") pod \"whisker-6c688b7869-4llw5\" (UID: \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\") " pod="calico-system/whisker-6c688b7869-4llw5" Oct 31 00:40:16.592527 kubelet[2501]: I1031 00:40:16.592270 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/87a28487-9bca-4535-a48a-e42ddac97eba-goldmane-key-pair\") pod \"goldmane-7c778bb748-vkzq5\" (UID: \"87a28487-9bca-4535-a48a-e42ddac97eba\") " pod="calico-system/goldmane-7c778bb748-vkzq5" Oct 31 00:40:16.592527 kubelet[2501]: I1031 00:40:16.592294 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0234a26-22e7-4dab-acf3-a0c995470142-calico-apiserver-certs\") pod \"calico-apiserver-796b6cb4bb-vthwl\" (UID: \"e0234a26-22e7-4dab-acf3-a0c995470142\") " pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" Oct 31 00:40:16.592527 kubelet[2501]: I1031 00:40:16.592309 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvc5j\" (UniqueName: \"kubernetes.io/projected/01815706-9b05-4375-91b1-4cc444b8c451-kube-api-access-dvc5j\") pod \"coredns-66bc5c9577-9754x\" (UID: \"01815706-9b05-4375-91b1-4cc444b8c451\") " pod="kube-system/coredns-66bc5c9577-9754x" Oct 31 00:40:16.592527 kubelet[2501]: I1031 00:40:16.592339 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2xw\" (UniqueName: \"kubernetes.io/projected/bb2918cc-8a31-4686-bd11-d009c753fde6-kube-api-access-pv2xw\") pod \"calico-apiserver-796b6cb4bb-6pz6b\" (UID: \"bb2918cc-8a31-4686-bd11-d009c753fde6\") " pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" Oct 31 00:40:16.592767 kubelet[2501]: I1031 00:40:16.592370 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a28487-9bca-4535-a48a-e42ddac97eba-config\") pod \"goldmane-7c778bb748-vkzq5\" (UID: \"87a28487-9bca-4535-a48a-e42ddac97eba\") " pod="calico-system/goldmane-7c778bb748-vkzq5" Oct 31 00:40:16.592767 kubelet[2501]: I1031 00:40:16.592388 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc3f2d9-311f-49d7-b160-402ffa40a7c3-config-volume\") pod \"coredns-66bc5c9577-w2k7k\" (UID: \"fbc3f2d9-311f-49d7-b160-402ffa40a7c3\") " pod="kube-system/coredns-66bc5c9577-w2k7k" Oct 31 00:40:16.592767 kubelet[2501]: I1031 00:40:16.592403 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mktm\" (UniqueName: \"kubernetes.io/projected/87a28487-9bca-4535-a48a-e42ddac97eba-kube-api-access-8mktm\") pod \"goldmane-7c778bb748-vkzq5\" (UID: \"87a28487-9bca-4535-a48a-e42ddac97eba\") " pod="calico-system/goldmane-7c778bb748-vkzq5" Oct 31 00:40:16.592767 kubelet[2501]: I1031 00:40:16.592418 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krvrh\" (UniqueName: \"kubernetes.io/projected/e0234a26-22e7-4dab-acf3-a0c995470142-kube-api-access-krvrh\") pod \"calico-apiserver-796b6cb4bb-vthwl\" (UID: \"e0234a26-22e7-4dab-acf3-a0c995470142\") " 
pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" Oct 31 00:40:16.837689 containerd[1463]: time="2025-10-31T00:40:16.837536284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6548595d47-2xk9x,Uid:d4810036-8734-4e5d-affc-6c36413b2262,Namespace:calico-system,Attempt:0,}" Oct 31 00:40:16.843489 containerd[1463]: time="2025-10-31T00:40:16.843278249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-vthwl,Uid:e0234a26-22e7-4dab-acf3-a0c995470142,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:40:16.848044 systemd[1]: Created slice kubepods-besteffort-podb8404757_a167_4c06_a272_e0eda36ae575.slice - libcontainer container kubepods-besteffort-podb8404757_a167_4c06_a272_e0eda36ae575.slice. Oct 31 00:40:16.853978 containerd[1463]: time="2025-10-31T00:40:16.853946117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gj62,Uid:b8404757-a167-4c06-a272-e0eda36ae575,Namespace:calico-system,Attempt:0,}" Oct 31 00:40:16.857151 kubelet[2501]: E1031 00:40:16.857107 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:16.857505 containerd[1463]: time="2025-10-31T00:40:16.857468276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w2k7k,Uid:fbc3f2d9-311f-49d7-b160-402ffa40a7c3,Namespace:kube-system,Attempt:0,}" Oct 31 00:40:16.949638 kubelet[2501]: E1031 00:40:16.949552 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:16.950490 containerd[1463]: time="2025-10-31T00:40:16.950435568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9754x,Uid:01815706-9b05-4375-91b1-4cc444b8c451,Namespace:kube-system,Attempt:0,}" Oct 31 00:40:16.957471 containerd[1463]: time="2025-10-31T00:40:16.957411108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-6pz6b,Uid:bb2918cc-8a31-4686-bd11-d009c753fde6,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:40:16.970115 containerd[1463]: time="2025-10-31T00:40:16.970038933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vkzq5,Uid:87a28487-9bca-4535-a48a-e42ddac97eba,Namespace:calico-system,Attempt:0,}" Oct 31 00:40:16.977479 kubelet[2501]: E1031 00:40:16.977430 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:16.978410 containerd[1463]: time="2025-10-31T00:40:16.978363784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 00:40:17.202582 containerd[1463]: time="2025-10-31T00:40:17.202454454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c688b7869-4llw5,Uid:9a28b64a-93c0-4cd4-83ea-3e73334b497e,Namespace:calico-system,Attempt:0,}" Oct 31 00:40:17.234057 containerd[1463]: time="2025-10-31T00:40:17.233966776Z" level=error msg="Failed to destroy network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.234567 containerd[1463]: time="2025-10-31T00:40:17.234517018Z" 
level=error msg="encountered an error cleaning up failed sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.234644 containerd[1463]: time="2025-10-31T00:40:17.234586338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6548595d47-2xk9x,Uid:d4810036-8734-4e5d-affc-6c36413b2262,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.235026 kubelet[2501]: E1031 00:40:17.234967 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.235140 kubelet[2501]: E1031 00:40:17.235079 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" Oct 31 00:40:17.235140 kubelet[2501]: E1031 00:40:17.235109 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" Oct 31 00:40:17.235229 kubelet[2501]: E1031 00:40:17.235182 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6548595d47-2xk9x_calico-system(d4810036-8734-4e5d-affc-6c36413b2262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6548595d47-2xk9x_calico-system(d4810036-8734-4e5d-affc-6c36413b2262)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262" Oct 31 00:40:17.236492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05-shm.mount: Deactivated successfully. 
Oct 31 00:40:17.518144 containerd[1463]: time="2025-10-31T00:40:17.517966772Z" level=error msg="Failed to destroy network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.518736 containerd[1463]: time="2025-10-31T00:40:17.518705559Z" level=error msg="encountered an error cleaning up failed sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.518899 containerd[1463]: time="2025-10-31T00:40:17.518863054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-vthwl,Uid:e0234a26-22e7-4dab-acf3-a0c995470142,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.519392 kubelet[2501]: E1031 00:40:17.519288 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.519392 kubelet[2501]: E1031 00:40:17.519384 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" Oct 31 00:40:17.519481 kubelet[2501]: E1031 00:40:17.519410 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" Oct 31 00:40:17.519600 kubelet[2501]: E1031 00:40:17.519487 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-796b6cb4bb-vthwl_calico-apiserver(e0234a26-22e7-4dab-acf3-a0c995470142)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-796b6cb4bb-vthwl_calico-apiserver(e0234a26-22e7-4dab-acf3-a0c995470142)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142" Oct 31 00:40:17.533139 containerd[1463]: time="2025-10-31T00:40:17.533062990Z" level=error msg="Failed to destroy network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.534340 containerd[1463]: time="2025-10-31T00:40:17.534299189Z" level=error msg="encountered an error cleaning up failed sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.534572 containerd[1463]: time="2025-10-31T00:40:17.534524282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w2k7k,Uid:fbc3f2d9-311f-49d7-b160-402ffa40a7c3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.535489 kubelet[2501]: E1031 00:40:17.534940 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.535489 kubelet[2501]: E1031 00:40:17.535017 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w2k7k" Oct 31 00:40:17.535489 kubelet[2501]: E1031 00:40:17.535040 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w2k7k" Oct 31 00:40:17.535708 kubelet[2501]: E1031 00:40:17.535102 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w2k7k_kube-system(fbc3f2d9-311f-49d7-b160-402ffa40a7c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w2k7k_kube-system(fbc3f2d9-311f-49d7-b160-402ffa40a7c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w2k7k" podUID="fbc3f2d9-311f-49d7-b160-402ffa40a7c3" Oct 31 00:40:17.546464 containerd[1463]: time="2025-10-31T00:40:17.546268559Z" level=error msg="Failed to destroy network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.546950 containerd[1463]: time="2025-10-31T00:40:17.546919190Z" level=error msg="encountered an error cleaning up failed sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.547034 containerd[1463]: time="2025-10-31T00:40:17.546981477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9754x,Uid:01815706-9b05-4375-91b1-4cc444b8c451,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.548532 kubelet[2501]: E1031 00:40:17.548473 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.548656 kubelet[2501]: E1031 00:40:17.548552 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9754x" Oct 31 00:40:17.548656 kubelet[2501]: E1031 00:40:17.548579 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9754x" Oct 31 00:40:17.548746 kubelet[2501]: E1031 00:40:17.548662 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9754x_kube-system(01815706-9b05-4375-91b1-4cc444b8c451)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9754x_kube-system(01815706-9b05-4375-91b1-4cc444b8c451)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9754x" podUID="01815706-9b05-4375-91b1-4cc444b8c451" Oct 31 00:40:17.587235 containerd[1463]: time="2025-10-31T00:40:17.587162306Z" level=error msg="Failed to destroy network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.587761 containerd[1463]: time="2025-10-31T00:40:17.587722706Z" level=error msg="encountered an error cleaning up failed sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.587847 containerd[1463]: time="2025-10-31T00:40:17.587793058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gj62,Uid:b8404757-a167-4c06-a272-e0eda36ae575,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.588134 kubelet[2501]: E1031 00:40:17.588081 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.588220 kubelet[2501]: E1031 00:40:17.588165 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6gj62" Oct 31 00:40:17.588220 kubelet[2501]: E1031 00:40:17.588200 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6gj62" Oct 31 00:40:17.588318 kubelet[2501]: E1031 00:40:17.588276 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:17.592673 containerd[1463]: time="2025-10-31T00:40:17.591754703Z" level=error msg="Failed to destroy network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.592673 containerd[1463]: time="2025-10-31T00:40:17.592351562Z" level=error msg="encountered an error cleaning up failed sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.592673 containerd[1463]: time="2025-10-31T00:40:17.592430111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vkzq5,Uid:87a28487-9bca-4535-a48a-e42ddac97eba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.592863 kubelet[2501]: E1031 00:40:17.592813 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.593229 kubelet[2501]: E1031 00:40:17.593136 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-vkzq5" Oct 31 00:40:17.593229 kubelet[2501]: E1031 00:40:17.593172 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-vkzq5" Oct 31 00:40:17.594577 kubelet[2501]: E1031 00:40:17.593501 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-vkzq5_calico-system(87a28487-9bca-4535-a48a-e42ddac97eba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-vkzq5_calico-system(87a28487-9bca-4535-a48a-e42ddac97eba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba" Oct 31 00:40:17.602356 containerd[1463]: time="2025-10-31T00:40:17.602299280Z" level=error msg="Failed to destroy network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.602807 containerd[1463]: time="2025-10-31T00:40:17.602765945Z" level=error msg="encountered an error cleaning up failed sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.602872 containerd[1463]: time="2025-10-31T00:40:17.602817772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-6pz6b,Uid:bb2918cc-8a31-4686-bd11-d009c753fde6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.603119 kubelet[2501]: E1031 00:40:17.603077 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.603187 kubelet[2501]: E1031 00:40:17.603141 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" Oct 31 00:40:17.603187 kubelet[2501]: E1031 00:40:17.603164 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" Oct 31 00:40:17.603276 kubelet[2501]: E1031 00:40:17.603224 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-796b6cb4bb-6pz6b_calico-apiserver(bb2918cc-8a31-4686-bd11-d009c753fde6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-796b6cb4bb-6pz6b_calico-apiserver(bb2918cc-8a31-4686-bd11-d009c753fde6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6" Oct 31 00:40:17.605091 containerd[1463]: time="2025-10-31T00:40:17.605015026Z" level=error msg="Failed to destroy network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.605626 containerd[1463]: time="2025-10-31T00:40:17.605576809Z" level=error msg="encountered an error cleaning up failed sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.605719 containerd[1463]: time="2025-10-31T00:40:17.605684351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c688b7869-4llw5,Uid:9a28b64a-93c0-4cd4-83ea-3e73334b497e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.606011 kubelet[2501]: E1031 00:40:17.605956 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:17.606071 kubelet[2501]: E1031 00:40:17.606051 2501 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c688b7869-4llw5" Oct 31 00:40:17.606107 kubelet[2501]: E1031 00:40:17.606078 2501 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c688b7869-4llw5" Oct 31 00:40:17.606196 kubelet[2501]: E1031 00:40:17.606156 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-6c688b7869-4llw5_calico-system(9a28b64a-93c0-4cd4-83ea-3e73334b497e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c688b7869-4llw5_calico-system(9a28b64a-93c0-4cd4-83ea-3e73334b497e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c688b7869-4llw5" podUID="9a28b64a-93c0-4cd4-83ea-3e73334b497e" Oct 31 00:40:17.979676 kubelet[2501]: I1031 00:40:17.979649 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:17.980968 containerd[1463]: time="2025-10-31T00:40:17.980416935Z" level=info msg="StopPodSandbox for \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\"" Oct 31 00:40:17.980968 containerd[1463]: time="2025-10-31T00:40:17.980637309Z" level=info msg="Ensure that sandbox 16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9 in task-service has been cleanup successfully" Oct 31 00:40:17.980968 containerd[1463]: time="2025-10-31T00:40:17.980854396Z" level=info msg="StopPodSandbox for \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\"" Oct 31 00:40:17.981106 kubelet[2501]: I1031 00:40:17.980462 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:17.981142 containerd[1463]: time="2025-10-31T00:40:17.980996152Z" level=info msg="Ensure that sandbox b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a in task-service has been cleanup successfully" Oct 31 00:40:17.982101 kubelet[2501]: I1031 00:40:17.982078 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:17.983084 containerd[1463]: time="2025-10-31T00:40:17.982662649Z" level=info msg="StopPodSandbox for \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\"" Oct 31 00:40:17.983084 containerd[1463]: time="2025-10-31T00:40:17.982843749Z" level=info msg="Ensure that sandbox fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029 in task-service has been cleanup successfully" Oct 31 00:40:17.985434 kubelet[2501]: I1031 00:40:17.985396 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:17.986365 containerd[1463]: time="2025-10-31T00:40:17.986328638Z" level=info msg="StopPodSandbox for \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\"" Oct 31 00:40:17.986665 containerd[1463]: time="2025-10-31T00:40:17.986638870Z" level=info msg="Ensure that sandbox 1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec in task-service has been cleanup successfully" Oct 31 00:40:17.989020 kubelet[2501]: I1031 00:40:17.988993 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:17.989817 containerd[1463]: time="2025-10-31T00:40:17.989786046Z" level=info msg="StopPodSandbox for \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\"" Oct 
31 00:40:17.990210 containerd[1463]: time="2025-10-31T00:40:17.990187068Z" level=info msg="Ensure that sandbox d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd in task-service has been cleanup successfully" Oct 31 00:40:17.990836 kubelet[2501]: I1031 00:40:17.990773 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:17.991681 containerd[1463]: time="2025-10-31T00:40:17.991584551Z" level=info msg="StopPodSandbox for \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\"" Oct 31 00:40:17.992419 containerd[1463]: time="2025-10-31T00:40:17.992389782Z" level=info msg="Ensure that sandbox 3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2 in task-service has been cleanup successfully" Oct 31 00:40:17.994363 kubelet[2501]: I1031 00:40:17.994324 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:17.995275 containerd[1463]: time="2025-10-31T00:40:17.995248566Z" level=info msg="StopPodSandbox for \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\"" Oct 31 00:40:17.995569 containerd[1463]: time="2025-10-31T00:40:17.995545503Z" level=info msg="Ensure that sandbox f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87 in task-service has been cleanup successfully" Oct 31 00:40:17.997873 kubelet[2501]: I1031 00:40:17.997828 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:17.998428 containerd[1463]: time="2025-10-31T00:40:17.998395320Z" level=info msg="StopPodSandbox for \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\"" Oct 31 00:40:17.998885 containerd[1463]: time="2025-10-31T00:40:17.998835196Z" level=info msg="Ensure that sandbox 3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05 in task-service has been cleanup successfully" Oct 31 00:40:18.071529 containerd[1463]: time="2025-10-31T00:40:18.071254808Z" level=error msg="StopPodSandbox for \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\" failed" error="failed to destroy network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.071529 containerd[1463]: time="2025-10-31T00:40:18.071465733Z" level=error msg="StopPodSandbox for \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\" failed" error="failed to destroy network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.071749 kubelet[2501]: E1031 00:40:18.071593 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:18.071803 kubelet[2501]: E1031 00:40:18.071665 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05"} Oct 31 00:40:18.071840 kubelet[2501]: E1031 00:40:18.071789 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:18.071911 kubelet[2501]: E1031 00:40:18.071864 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2"} Oct 31 00:40:18.072001 kubelet[2501]: E1031 00:40:18.071914 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.072001 kubelet[2501]: E1031 00:40:18.071946 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c688b7869-4llw5" podUID="9a28b64a-93c0-4cd4-83ea-3e73334b497e" Oct 31 00:40:18.072001 kubelet[2501]: E1031 00:40:18.071810 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4810036-8734-4e5d-affc-6c36413b2262\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.072131 kubelet[2501]: E1031 00:40:18.071989 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4810036-8734-4e5d-affc-6c36413b2262\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262" Oct 31 00:40:18.072935 containerd[1463]: 
time="2025-10-31T00:40:18.072876862Z" level=error msg="StopPodSandbox for \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\" failed" error="failed to destroy network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.073116 kubelet[2501]: E1031 00:40:18.073034 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:18.073183 kubelet[2501]: E1031 00:40:18.073124 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd"} Oct 31 00:40:18.073183 kubelet[2501]: E1031 00:40:18.073155 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0234a26-22e7-4dab-acf3-a0c995470142\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.073276 containerd[1463]: time="2025-10-31T00:40:18.073116762Z" level=error msg="StopPodSandbox for \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\" failed" error="failed to destroy network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.073322 kubelet[2501]: E1031 00:40:18.073191 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0234a26-22e7-4dab-acf3-a0c995470142\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142" Oct 31 00:40:18.073369 containerd[1463]: time="2025-10-31T00:40:18.073346502Z" level=error msg="StopPodSandbox for \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\" failed" error="failed to destroy network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.073492 kubelet[2501]: E1031 00:40:18.073465 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:18.073532 kubelet[2501]: E1031 00:40:18.073494 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029"} Oct 31 00:40:18.073532 kubelet[2501]: E1031 00:40:18.073523 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8404757-a167-4c06-a272-e0eda36ae575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.073626 kubelet[2501]: E1031 00:40:18.073550 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8404757-a167-4c06-a272-e0eda36ae575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:18.073626 kubelet[2501]: E1031 00:40:18.073369 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:18.073626 kubelet[2501]: E1031 00:40:18.073591 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec"} Oct 31 00:40:18.073721 kubelet[2501]: E1031 00:40:18.073636 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb2918cc-8a31-4686-bd11-d009c753fde6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.073721 kubelet[2501]: E1031 00:40:18.073664 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb2918cc-8a31-4686-bd11-d009c753fde6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6" Oct 31 00:40:18.088896 containerd[1463]: time="2025-10-31T00:40:18.088819527Z" level=error msg="StopPodSandbox for \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\" failed" error="failed to destroy network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.089558 kubelet[2501]: E1031 00:40:18.089494 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:18.089651 kubelet[2501]: E1031 00:40:18.089569 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a"} Oct 31 00:40:18.089651 kubelet[2501]: E1031 00:40:18.089630 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbc3f2d9-311f-49d7-b160-402ffa40a7c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.089766 kubelet[2501]: E1031 00:40:18.089664 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbc3f2d9-311f-49d7-b160-402ffa40a7c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w2k7k" podUID="fbc3f2d9-311f-49d7-b160-402ffa40a7c3" Oct 31 00:40:18.091351 containerd[1463]: time="2025-10-31T00:40:18.091289933Z" level=error msg="StopPodSandbox for \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\" failed" error="failed to destroy network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.091539 kubelet[2501]: E1031 00:40:18.091497 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:18.091588 kubelet[2501]: E1031 00:40:18.091545 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9"} Oct 31 00:40:18.091588 kubelet[2501]: E1031 00:40:18.091574 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87a28487-9bca-4535-a48a-e42ddac97eba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.091692 kubelet[2501]: E1031 00:40:18.091622 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87a28487-9bca-4535-a48a-e42ddac97eba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba" Oct 31 00:40:18.102210 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a-shm.mount: Deactivated successfully. Oct 31 00:40:18.102383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd-shm.mount: Deactivated successfully. 
Oct 31 00:40:18.104143 containerd[1463]: time="2025-10-31T00:40:18.104101794Z" level=error msg="StopPodSandbox for \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\" failed" error="failed to destroy network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:40:18.104424 kubelet[2501]: E1031 00:40:18.104376 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:18.104493 kubelet[2501]: E1031 00:40:18.104438 2501 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87"} Oct 31 00:40:18.104493 kubelet[2501]: E1031 00:40:18.104477 2501 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01815706-9b05-4375-91b1-4cc444b8c451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:40:18.104572 kubelet[2501]: E1031 00:40:18.104512 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01815706-9b05-4375-91b1-4cc444b8c451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9754x" podUID="01815706-9b05-4375-91b1-4cc444b8c451" Oct 31 00:40:24.167489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173140983.mount: Deactivated successfully. 
Oct 31 00:40:24.432024 kubelet[2501]: I1031 00:40:24.431870 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:40:24.432505 kubelet[2501]: E1031 00:40:24.432336 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:24.991578 containerd[1463]: time="2025-10-31T00:40:24.991501959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:24.992682 containerd[1463]: time="2025-10-31T00:40:24.992632229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 00:40:24.994737 containerd[1463]: time="2025-10-31T00:40:24.994699348Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:25.013073 kubelet[2501]: E1031 00:40:25.013046 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:25.102164 containerd[1463]: time="2025-10-31T00:40:25.102068882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:40:25.102794 containerd[1463]: time="2025-10-31T00:40:25.102748057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.124341643s" Oct 31 00:40:25.102794 containerd[1463]: time="2025-10-31T00:40:25.102777762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 00:40:25.287450 containerd[1463]: time="2025-10-31T00:40:25.287313507Z" level=info msg="CreateContainer within sandbox \"c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 00:40:26.199373 containerd[1463]: time="2025-10-31T00:40:26.199290753Z" level=info msg="CreateContainer within sandbox \"c7958e93079635356b507ceab7585df300b29ba824e4bdf8023847cf4546a54c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"493f8cb6c6bf91ad4928a76987f7e3ad3cfced75cb02138543a982097fe7e9f7\"" Oct 31 00:40:26.200213 containerd[1463]: time="2025-10-31T00:40:26.200125448Z" level=info msg="StartContainer for \"493f8cb6c6bf91ad4928a76987f7e3ad3cfced75cb02138543a982097fe7e9f7\"" Oct 31 00:40:26.293765 systemd[1]: Started cri-containerd-493f8cb6c6bf91ad4928a76987f7e3ad3cfced75cb02138543a982097fe7e9f7.scope - libcontainer container 493f8cb6c6bf91ad4928a76987f7e3ad3cfced75cb02138543a982097fe7e9f7. Oct 31 00:40:26.358804 containerd[1463]: time="2025-10-31T00:40:26.358726379Z" level=info msg="StartContainer for \"493f8cb6c6bf91ad4928a76987f7e3ad3cfced75cb02138543a982097fe7e9f7\" returns successfully" Oct 31 00:40:26.441268 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 31 00:40:26.441965 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Oct 31 00:40:27.020519 kubelet[2501]: E1031 00:40:27.020450 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:27.862718 kubelet[2501]: I1031 00:40:27.862651 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7hvws" podStartSLOduration=4.488918235 podStartE2EDuration="31.862630259s" podCreationTimestamp="2025-10-31 00:39:56 +0000 UTC" firstStartedPulling="2025-10-31 00:39:57.730151696 +0000 UTC m=+20.004134845" lastFinishedPulling="2025-10-31 00:40:25.10386372 +0000 UTC m=+47.377846869" observedRunningTime="2025-10-31 00:40:27.54580059 +0000 UTC m=+49.819783759" watchObservedRunningTime="2025-10-31 00:40:27.862630259 +0000 UTC m=+50.136613408" Oct 31 00:40:27.864766 containerd[1463]: time="2025-10-31T00:40:27.864698680Z" level=info msg="StopPodSandbox for \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\"" Oct 31 00:40:28.022635 kubelet[2501]: E1031 00:40:28.022576 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:28.843040 containerd[1463]: time="2025-10-31T00:40:28.842909208Z" level=info msg="StopPodSandbox for \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\"" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:28.594 [INFO][3831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:28.594 [INFO][3831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" iface="eth0" netns="/var/run/netns/cni-1d482eba-cb1e-1c4f-aa1d-139ed942cec7" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:28.596 [INFO][3831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" iface="eth0" netns="/var/run/netns/cni-1d482eba-cb1e-1c4f-aa1d-139ed942cec7" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:28.597 [INFO][3831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" iface="eth0" netns="/var/run/netns/cni-1d482eba-cb1e-1c4f-aa1d-139ed942cec7" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:28.597 [INFO][3831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:28.597 [INFO][3831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:29.278 [INFO][3864] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:29.278 [INFO][3864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:29.278 [INFO][3864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:29.288 [WARNING][3864] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:29.288 [INFO][3864] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:29.291 [INFO][3864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:29.297501 containerd[1463]: 2025-10-31 00:40:29.294 [INFO][3831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:29.299112 containerd[1463]: time="2025-10-31T00:40:29.298572290Z" level=info msg="TearDown network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\" successfully" Oct 31 00:40:29.299112 containerd[1463]: time="2025-10-31T00:40:29.298626171Z" level=info msg="StopPodSandbox for \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\" returns successfully" Oct 31 00:40:29.301425 systemd[1]: run-netns-cni\x2d1d482eba\x2dcb1e\x2d1c4f\x2daa1d\x2d139ed942cec7.mount: Deactivated successfully. Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.183 [INFO][3880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.183 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" iface="eth0" netns="/var/run/netns/cni-d37100de-eac3-f5ae-a32e-7a74b50c1fb6" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.183 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" iface="eth0" netns="/var/run/netns/cni-d37100de-eac3-f5ae-a32e-7a74b50c1fb6" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.183 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" iface="eth0" netns="/var/run/netns/cni-d37100de-eac3-f5ae-a32e-7a74b50c1fb6" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.183 [INFO][3880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.183 [INFO][3880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.278 [INFO][3893] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.278 [INFO][3893] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.291 [INFO][3893] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.301 [WARNING][3893] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.302 [INFO][3893] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.304 [INFO][3893] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:29.311538 containerd[1463]: 2025-10-31 00:40:29.308 [INFO][3880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:29.339498 containerd[1463]: time="2025-10-31T00:40:29.314762537Z" level=info msg="TearDown network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\" successfully" Oct 31 00:40:29.339498 containerd[1463]: time="2025-10-31T00:40:29.314797613Z" level=info msg="StopPodSandbox for \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\" returns successfully" Oct 31 00:40:29.315348 systemd[1]: run-netns-cni\x2dd37100de\x2deac3\x2df5ae\x2da32e\x2d7a74b50c1fb6.mount: Deactivated successfully. 
Oct 31 00:40:29.448473 kubelet[2501]: E1031 00:40:29.448201 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:29.449970 containerd[1463]: time="2025-10-31T00:40:29.448825974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w2k7k,Uid:fbc3f2d9-311f-49d7-b160-402ffa40a7c3,Namespace:kube-system,Attempt:1,}" Oct 31 00:40:29.491915 kubelet[2501]: I1031 00:40:29.491857 2501 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-ca-bundle\") pod \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\" (UID: \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\") " Oct 31 00:40:29.491915 kubelet[2501]: I1031 00:40:29.491904 2501 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7nzj\" (UniqueName: \"kubernetes.io/projected/9a28b64a-93c0-4cd4-83ea-3e73334b497e-kube-api-access-r7nzj\") pod \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\" (UID: \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\") " Oct 31 00:40:29.491915 kubelet[2501]: I1031 00:40:29.491925 2501 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-backend-key-pair\") pod \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\" (UID: \"9a28b64a-93c0-4cd4-83ea-3e73334b497e\") " Oct 31 00:40:29.492553 kubelet[2501]: I1031 00:40:29.492494 2501 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a28b64a-93c0-4cd4-83ea-3e73334b497e" (UID: "9a28b64a-93c0-4cd4-83ea-3e73334b497e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:40:29.496899 kubelet[2501]: I1031 00:40:29.496778 2501 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a28b64a-93c0-4cd4-83ea-3e73334b497e-kube-api-access-r7nzj" (OuterVolumeSpecName: "kube-api-access-r7nzj") pod "9a28b64a-93c0-4cd4-83ea-3e73334b497e" (UID: "9a28b64a-93c0-4cd4-83ea-3e73334b497e"). InnerVolumeSpecName "kube-api-access-r7nzj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:40:29.496899 kubelet[2501]: I1031 00:40:29.496809 2501 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a28b64a-93c0-4cd4-83ea-3e73334b497e" (UID: "9a28b64a-93c0-4cd4-83ea-3e73334b497e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:40:29.498717 systemd[1]: var-lib-kubelet-pods-9a28b64a\x2d93c0\x2d4cd4\x2d83ea\x2d3e73334b497e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr7nzj.mount: Deactivated successfully. Oct 31 00:40:29.498850 systemd[1]: var-lib-kubelet-pods-9a28b64a\x2d93c0\x2d4cd4\x2d83ea\x2d3e73334b497e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 31 00:40:29.592598 kubelet[2501]: I1031 00:40:29.592463 2501 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 00:40:29.592598 kubelet[2501]: I1031 00:40:29.592500 2501 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r7nzj\" (UniqueName: \"kubernetes.io/projected/9a28b64a-93c0-4cd4-83ea-3e73334b497e-kube-api-access-r7nzj\") on node \"localhost\" DevicePath \"\"" Oct 31 00:40:29.592598 kubelet[2501]: I1031 00:40:29.592511 2501 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a28b64a-93c0-4cd4-83ea-3e73334b497e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 00:40:29.631718 systemd-networkd[1368]: calib73bfa0c7af: Link UP Oct 31 00:40:29.632238 systemd-networkd[1368]: calib73bfa0c7af: Gained carrier Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.538 [INFO][3913] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.552 [INFO][3913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--w2k7k-eth0 coredns-66bc5c9577- kube-system fbc3f2d9-311f-49d7-b160-402ffa40a7c3 991 0 2025-10-31 00:39:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-w2k7k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib73bfa0c7af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.553 [INFO][3913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.581 [INFO][3928] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" HandleID="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.581 [INFO][3928] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" HandleID="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-w2k7k", "timestamp":"2025-10-31 00:40:29.581088782 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.581 [INFO][3928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.581 [INFO][3928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.581 [INFO][3928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.591 [INFO][3928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.599 [INFO][3928] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.603 [INFO][3928] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.605 [INFO][3928] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.607 [INFO][3928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.607 [INFO][3928] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.608 [INFO][3928] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.612 [INFO][3928] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.617 [INFO][3928] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.617 [INFO][3928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" host="localhost" Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.617 [INFO][3928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:40:29.653771 containerd[1463]: 2025-10-31 00:40:29.617 [INFO][3928] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" HandleID="k8s-pod-network.08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.654598 containerd[1463]: 2025-10-31 00:40:29.621 [INFO][3913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w2k7k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc3f2d9-311f-49d7-b160-402ffa40a7c3", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-w2k7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib73bfa0c7af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:29.654598 containerd[1463]: 2025-10-31 00:40:29.621 [INFO][3913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.654598 containerd[1463]: 2025-10-31 00:40:29.621 [INFO][3913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib73bfa0c7af ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.654598 containerd[1463]: 2025-10-31 00:40:29.633
[INFO][3913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.654598 containerd[1463]: 2025-10-31 00:40:29.633 [INFO][3913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w2k7k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc3f2d9-311f-49d7-b160-402ffa40a7c3", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac", Pod:"coredns-66bc5c9577-w2k7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib73bfa0c7af", MAC:"72:84:f6:f7:2f:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:29.654598 containerd[1463]: 2025-10-31 00:40:29.648 [INFO][3913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac" Namespace="kube-system" Pod="coredns-66bc5c9577-w2k7k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:29.715583 containerd[1463]: time="2025-10-31T00:40:29.715411084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:29.715583 containerd[1463]: time="2025-10-31T00:40:29.715501353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:29.715583 containerd[1463]: time="2025-10-31T00:40:29.715520299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:29.715880 containerd[1463]: time="2025-10-31T00:40:29.715690478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:29.744809 systemd[1]: Started cri-containerd-08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac.scope - libcontainer container 08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac. Oct 31 00:40:29.757783 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:29.785303 containerd[1463]: time="2025-10-31T00:40:29.785237751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w2k7k,Uid:fbc3f2d9-311f-49d7-b160-402ffa40a7c3,Namespace:kube-system,Attempt:1,} returns sandbox id \"08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac\"" Oct 31 00:40:29.786239 kubelet[2501]: E1031 00:40:29.786196 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:29.794315 containerd[1463]: time="2025-10-31T00:40:29.794245522Z" level=info msg="CreateContainer within sandbox \"08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:40:29.815391 containerd[1463]: time="2025-10-31T00:40:29.815312752Z" level=info msg="CreateContainer within sandbox \"08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c021d7d1b3c55c86fa052a7320434c83f4994d5e862e85f5ea91e23ad05c337\"" Oct 31 00:40:29.817713 containerd[1463]: time="2025-10-31T00:40:29.817675936Z" level=info msg="StartContainer for \"2c021d7d1b3c55c86fa052a7320434c83f4994d5e862e85f5ea91e23ad05c337\"" Oct 31 00:40:29.844695 containerd[1463]: time="2025-10-31T00:40:29.844541785Z" level=info msg="StopPodSandbox for \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\"" Oct 31 00:40:29.845600 containerd[1463]: time="2025-10-31T00:40:29.845446312Z" level=info msg="StopPodSandbox for \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\"" Oct 31 00:40:29.846137 containerd[1463]: time="2025-10-31T00:40:29.845903830Z" level=info msg="StopPodSandbox for \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\"" Oct 31 00:40:29.851790 systemd[1]: Started cri-containerd-2c021d7d1b3c55c86fa052a7320434c83f4994d5e862e85f5ea91e23ad05c337.scope - libcontainer container 2c021d7d1b3c55c86fa052a7320434c83f4994d5e862e85f5ea91e23ad05c337. Oct 31 00:40:29.854956 systemd[1]: Removed slice kubepods-besteffort-pod9a28b64a_93c0_4cd4_83ea_3e73334b497e.slice - libcontainer container kubepods-besteffort-pod9a28b64a_93c0_4cd4_83ea_3e73334b497e.slice. 
Oct 31 00:40:29.919372 containerd[1463]: time="2025-10-31T00:40:29.919313410Z" level=info msg="StartContainer for \"2c021d7d1b3c55c86fa052a7320434c83f4994d5e862e85f5ea91e23ad05c337\" returns successfully" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.932 [INFO][4018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.933 [INFO][4018] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" iface="eth0" netns="/var/run/netns/cni-078318f4-6429-51f3-fbaf-2b4507681866" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.933 [INFO][4018] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" iface="eth0" netns="/var/run/netns/cni-078318f4-6429-51f3-fbaf-2b4507681866" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.934 [INFO][4018] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" iface="eth0" netns="/var/run/netns/cni-078318f4-6429-51f3-fbaf-2b4507681866" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.934 [INFO][4018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.934 [INFO][4018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.972 [INFO][4073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.972 [INFO][4073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.972 [INFO][4073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.985 [WARNING][4073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.985 [INFO][4073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:29.995 [INFO][4073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:30.004724 containerd[1463]: 2025-10-31 00:40:30.000 [INFO][4018] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:30.004724 containerd[1463]: time="2025-10-31T00:40:30.004948058Z" level=info msg="TearDown network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\" successfully" Oct 31 00:40:30.004724 containerd[1463]: time="2025-10-31T00:40:30.004994496Z" level=info msg="StopPodSandbox for \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\" returns successfully" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.937 [INFO][4041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.937 [INFO][4041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" iface="eth0" netns="/var/run/netns/cni-9a5fdefa-0cad-0efb-6b4e-c6c20aa1088e" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.938 [INFO][4041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" iface="eth0" netns="/var/run/netns/cni-9a5fdefa-0cad-0efb-6b4e-c6c20aa1088e" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.939 [INFO][4041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" iface="eth0" netns="/var/run/netns/cni-9a5fdefa-0cad-0efb-6b4e-c6c20aa1088e" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.939 [INFO][4041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.939 [INFO][4041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.976 [INFO][4076] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.976 [INFO][4076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:29.996 [INFO][4076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:30.011 [WARNING][4076] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:30.011 [INFO][4076] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:30.013 [INFO][4076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:30.017875 containerd[1463]: 2025-10-31 00:40:30.015 [INFO][4041] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:30.018968 containerd[1463]: time="2025-10-31T00:40:30.018909835Z" level=info msg="TearDown network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\" successfully" Oct 31 00:40:30.018968 containerd[1463]: time="2025-10-31T00:40:30.018952324Z" level=info msg="StopPodSandbox for \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\" returns successfully" Oct 31 00:40:30.032842 kubelet[2501]: E1031 00:40:30.032802 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:29.937 [INFO][4051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:29.938 [INFO][4051] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" iface="eth0" netns="/var/run/netns/cni-13de201d-4c8f-ad70-9229-516931408130" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:29.939 [INFO][4051] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" iface="eth0" netns="/var/run/netns/cni-13de201d-4c8f-ad70-9229-516931408130" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:29.939 [INFO][4051] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" iface="eth0" netns="/var/run/netns/cni-13de201d-4c8f-ad70-9229-516931408130" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:29.939 [INFO][4051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:29.940 [INFO][4051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:30.008 [INFO][4077] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:30.009 [INFO][4077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:30.013 [INFO][4077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:30.024 [WARNING][4077] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:30.024 [INFO][4077] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:30.026 [INFO][4077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:30.037765 containerd[1463]: 2025-10-31 00:40:30.031 [INFO][4051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:30.038708 containerd[1463]: time="2025-10-31T00:40:30.038008561Z" level=info msg="TearDown network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\" successfully" Oct 31 00:40:30.038708 containerd[1463]: time="2025-10-31T00:40:30.038044278Z" level=info msg="StopPodSandbox for \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\" returns successfully" Oct 31 00:40:30.219196 containerd[1463]: time="2025-10-31T00:40:30.217558162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6548595d47-2xk9x,Uid:d4810036-8734-4e5d-affc-6c36413b2262,Namespace:calico-system,Attempt:1,}" Oct 31 00:40:30.307854 systemd[1]: run-netns-cni\x2d13de201d\x2d4c8f\x2dad70\x2d9229\x2d516931408130.mount: Deactivated successfully. Oct 31 00:40:30.307966 systemd[1]: run-netns-cni\x2d9a5fdefa\x2d0cad\x2d0efb\x2d6b4e\x2dc6c20aa1088e.mount: Deactivated successfully. Oct 31 00:40:30.308051 systemd[1]: run-netns-cni\x2d078318f4\x2d6429\x2d51f3\x2dfbaf\x2d2b4507681866.mount: Deactivated successfully. 
Oct 31 00:40:30.447657 kernel: bpftool[4232]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 31 00:40:30.624432 containerd[1463]: time="2025-10-31T00:40:30.624161790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-vthwl,Uid:e0234a26-22e7-4dab-acf3-a0c995470142,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:40:31.034731 kubelet[2501]: E1031 00:40:31.034699 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:31.400948 systemd-networkd[1368]: calib73bfa0c7af: Gained IPv6LL Oct 31 00:40:31.456018 systemd-networkd[1368]: vxlan.calico: Link UP Oct 31 00:40:31.456029 systemd-networkd[1368]: vxlan.calico: Gained carrier Oct 31 00:40:31.583215 kubelet[2501]: E1031 00:40:31.583146 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:31.586697 containerd[1463]: time="2025-10-31T00:40:31.584580802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9754x,Uid:01815706-9b05-4375-91b1-4cc444b8c451,Namespace:kube-system,Attempt:1,}" Oct 31 00:40:31.717013 kubelet[2501]: I1031 00:40:31.716926 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w2k7k" podStartSLOduration=49.716905764 podStartE2EDuration="49.716905764s" podCreationTimestamp="2025-10-31 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:40:31.716186064 +0000 UTC m=+53.990169214" watchObservedRunningTime="2025-10-31 00:40:31.716905764 +0000 UTC m=+53.990888913" Oct 31 00:40:32.011430 containerd[1463]: time="2025-10-31T00:40:32.009732577Z" level=info msg="StopPodSandbox for \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\"" Oct 31 00:40:32.012788 kubelet[2501]: I1031 00:40:32.012191 2501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a28b64a-93c0-4cd4-83ea-3e73334b497e" path="/var/lib/kubelet/pods/9a28b64a-93c0-4cd4-83ea-3e73334b497e/volumes" Oct 31 00:40:32.017086 systemd[1]: Created slice kubepods-besteffort-podacc770d5_5267_4ed9_8f3a_c4a12b51e0b8.slice - libcontainer container kubepods-besteffort-podacc770d5_5267_4ed9_8f3a_c4a12b51e0b8.slice. 
Oct 31 00:40:32.036273 kubelet[2501]: E1031 00:40:32.036230 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:32.111453 kubelet[2501]: I1031 00:40:32.111366 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/acc770d5-5267-4ed9-8f3a-c4a12b51e0b8-whisker-backend-key-pair\") pod \"whisker-76599bb565-s49cl\" (UID: \"acc770d5-5267-4ed9-8f3a-c4a12b51e0b8\") " pod="calico-system/whisker-76599bb565-s49cl" Oct 31 00:40:32.111453 kubelet[2501]: I1031 00:40:32.111461 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acc770d5-5267-4ed9-8f3a-c4a12b51e0b8-whisker-ca-bundle\") pod \"whisker-76599bb565-s49cl\" (UID: \"acc770d5-5267-4ed9-8f3a-c4a12b51e0b8\") " pod="calico-system/whisker-76599bb565-s49cl" Oct 31 00:40:32.111898 kubelet[2501]: I1031 00:40:32.111640 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh8pb\" (UniqueName: \"kubernetes.io/projected/acc770d5-5267-4ed9-8f3a-c4a12b51e0b8-kube-api-access-lh8pb\") pod \"whisker-76599bb565-s49cl\" (UID: \"acc770d5-5267-4ed9-8f3a-c4a12b51e0b8\") " pod="calico-system/whisker-76599bb565-s49cl" Oct 31 00:40:32.381945 containerd[1463]: time="2025-10-31T00:40:32.381813371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76599bb565-s49cl,Uid:acc770d5-5267-4ed9-8f3a-c4a12b51e0b8,Namespace:calico-system,Attempt:0,}" Oct 31 00:40:32.614062 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:41216.service - OpenSSH per-connection server daemon (10.0.0.1:41216). Oct 31 00:40:32.680005 systemd-networkd[1368]: caliba3e4f64233: Link UP Oct 31 00:40:32.680885 systemd-networkd[1368]: caliba3e4f64233: Gained carrier Oct 31 00:40:32.693976 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 41216 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.521 [INFO][4322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.521 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" iface="eth0" netns="/var/run/netns/cni-a95765f2-46b2-7e9d-6111-78b8cd680ba5" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.521 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" iface="eth0" netns="/var/run/netns/cni-a95765f2-46b2-7e9d-6111-78b8cd680ba5" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.522 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" iface="eth0" netns="/var/run/netns/cni-a95765f2-46b2-7e9d-6111-78b8cd680ba5" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.522 [INFO][4322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.522 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.595 [INFO][4376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.595 [INFO][4376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.662 [INFO][4376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.675 [WARNING][4376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.675 [INFO][4376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.678 [INFO][4376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:32.695665 containerd[1463]: 2025-10-31 00:40:32.684 [INFO][4322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:32.696127 containerd[1463]: time="2025-10-31T00:40:32.695893364Z" level=info msg="TearDown network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\" successfully" Oct 31 00:40:32.696127 containerd[1463]: time="2025-10-31T00:40:32.695935276Z" level=info msg="StopPodSandbox for \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\" returns successfully" Oct 31 00:40:32.696471 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:40:32.705177 containerd[1463]: time="2025-10-31T00:40:32.705082957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-6pz6b,Uid:bb2918cc-8a31-4686-bd11-d009c753fde6,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:40:32.705699 systemd-logind[1448]: New session 8 of user core. Oct 31 00:40:32.711858 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.539 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0 calico-apiserver-796b6cb4bb- calico-apiserver e0234a26-22e7-4dab-acf3-a0c995470142 1006 0 2025-10-31 00:39:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:796b6cb4bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-796b6cb4bb-vthwl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba3e4f64233 [] [] }} ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.539 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.587 [INFO][4385] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" HandleID="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.588 [INFO][4385] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" HandleID="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043b520), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-796b6cb4bb-vthwl", "timestamp":"2025-10-31 00:40:32.587450899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.589 [INFO][4385] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.589 [INFO][4385] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.589 [INFO][4385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.604 [INFO][4385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.624 [INFO][4385] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.633 [INFO][4385] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.637 [INFO][4385] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.640 [INFO][4385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.641 [INFO][4385] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.645 [INFO][4385] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3 Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.652 [INFO][4385] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.662 [INFO][4385] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.662 [INFO][4385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" host="localhost" Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.662 [INFO][4385] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:40:32.712492 containerd[1463]: 2025-10-31 00:40:32.663 [INFO][4385] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" HandleID="k8s-pod-network.986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:32.713193 containerd[1463]: 2025-10-31 00:40:32.668 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0234a26-22e7-4dab-acf3-a0c995470142", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-796b6cb4bb-vthwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba3e4f64233", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:32.713193 containerd[1463]: 2025-10-31 00:40:32.668 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:32.713193 containerd[1463]: 2025-10-31 00:40:32.669 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba3e4f64233 ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:32.713193 containerd[1463]: 2025-10-31 00:40:32.681 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:32.713193 containerd[1463]: 2025-10-31 00:40:32.682 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0234a26-22e7-4dab-acf3-a0c995470142", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3", Pod:"calico-apiserver-796b6cb4bb-vthwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba3e4f64233", MAC:"ca:e4:e4:8f:b5:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:32.713193 containerd[1463]: 2025-10-31 00:40:32.703 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-vthwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:32.757452 containerd[1463]: time="2025-10-31T00:40:32.757283883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:32.757452 containerd[1463]: time="2025-10-31T00:40:32.757366813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:32.757452 containerd[1463]: time="2025-10-31T00:40:32.757380540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:32.757981 containerd[1463]: time="2025-10-31T00:40:32.757497397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:32.766134 systemd-networkd[1368]: cali7b4f7222e81: Link UP Oct 31 00:40:32.766356 systemd-networkd[1368]: cali7b4f7222e81: Gained carrier Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.533 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0 calico-kube-controllers-6548595d47- calico-system d4810036-8734-4e5d-affc-6c36413b2262 1005 0 2025-10-31 00:39:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6548595d47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6548595d47-2xk9x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7b4f7222e81 [] [] }} ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.534 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.601 [INFO][4389] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" HandleID="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.601 [INFO][4389] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" HandleID="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6548595d47-2xk9x", "timestamp":"2025-10-31 00:40:32.601169044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.601 [INFO][4389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.678 [INFO][4389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
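Both host-side interfaces brought up by systemd-networkd so far (caliba3e4f64233 above, cali7b4f7222e81 for the kube-controllers pod) have the same shape: the fixed prefix cali plus an 11-character suffix. The length is no accident: Linux caps interface names at 15 bytes (IFNAMSIZ minus the trailing NUL), and len("cali") + 11 = 15. The log only shows the resulting names, so the hash-based derivation in this Go sketch is an assumption for illustration, not Calico's documented scheme:

    package main

    import (
    	"crypto/sha1"
    	"encoding/hex"
    	"fmt"
    )

    // hostVethName derives a "cali"-prefixed interface name from an endpoint
    // identifier. The suffix is truncated to 11 hex characters so the whole
    // name fits the 15-byte kernel limit. The SHA-1 derivation here is an
    // assumption; the log only shows the finished names.
    func hostVethName(endpointID string) string {
    	sum := sha1.Sum([]byte(endpointID))
    	return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
    	for _, id := range []string{
    		"localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0",
    		"localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0",
    	} {
    		fmt.Printf("%-64s -> %s\n", id, hostVethName(id))
    	}
    }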
Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.678 [INFO][4389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.704 [INFO][4389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.729 [INFO][4389] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.737 [INFO][4389] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.740 [INFO][4389] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.742 [INFO][4389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.742 [INFO][4389] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.744 [INFO][4389] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.751 [INFO][4389] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.758 [INFO][4389] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.758 [INFO][4389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" host="localhost" Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.758 [INFO][4389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
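The affinity steps in this trace ("Trying affinity for 192.168.88.128/26", "Attempting to load block") reflect Calico's unit of allocation: a host claims whole blocks from the IP pool, here a /26, and every pod scheduled on localhost draws from that same block. Mapping a pod address back to its block is plain prefix arithmetic, as in this small Go sketch (blockFor is an illustrative name):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // blockFor returns the /blockBits block containing ip, the unit of host
    // affinity in the trace ("Trying affinity for 192.168.88.128/26").
    func blockFor(ip netip.Addr, blockBits int) netip.Prefix {
    	p, _ := ip.Prefix(blockBits) // masks ip down to the block boundary
    	return p
    }

    func main() {
    	for _, s := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.134"} {
    		ip := netip.MustParseAddr(s)
    		fmt.Printf("%s belongs to block %s\n", ip, blockFor(ip, 26))
    	}
    }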
Oct 31 00:40:32.798321 containerd[1463]: 2025-10-31 00:40:32.758 [INFO][4389] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" HandleID="k8s-pod-network.1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:32.799162 containerd[1463]: 2025-10-31 00:40:32.763 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0", GenerateName:"calico-kube-controllers-6548595d47-", Namespace:"calico-system", SelfLink:"", UID:"d4810036-8734-4e5d-affc-6c36413b2262", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6548595d47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6548595d47-2xk9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b4f7222e81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:32.799162 containerd[1463]: 2025-10-31 00:40:32.763 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:32.799162 containerd[1463]: 2025-10-31 00:40:32.763 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b4f7222e81 ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:32.799162 containerd[1463]: 2025-10-31 00:40:32.765 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:32.799162 containerd[1463]: 2025-10-31 00:40:32.768 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0", GenerateName:"calico-kube-controllers-6548595d47-", Namespace:"calico-system", SelfLink:"", UID:"d4810036-8734-4e5d-affc-6c36413b2262", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6548595d47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f", Pod:"calico-kube-controllers-6548595d47-2xk9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b4f7222e81", MAC:"6a:3c:91:63:66:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:32.799162 containerd[1463]: 2025-10-31 00:40:32.791 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f" Namespace="calico-system" Pod="calico-kube-controllers-6548595d47-2xk9x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:32.808124 systemd[1]: run-netns-cni\x2da95765f2\x2d46b2\x2d7e9d\x2d6111\x2d78b8cd680ba5.mount: Deactivated successfully. Oct 31 00:40:32.847737 containerd[1463]: time="2025-10-31T00:40:32.847679409Z" level=info msg="StopPodSandbox for \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\"" Oct 31 00:40:32.850825 containerd[1463]: time="2025-10-31T00:40:32.850783924Z" level=info msg="StopPodSandbox for \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\"" Oct 31 00:40:32.851860 systemd[1]: Started cri-containerd-986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3.scope - libcontainer container 986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3. Oct 31 00:40:32.874487 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Oct 31 00:40:32.906974 containerd[1463]: time="2025-10-31T00:40:32.905791835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:32.915411 containerd[1463]: time="2025-10-31T00:40:32.913146160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:32.915411 containerd[1463]: time="2025-10-31T00:40:32.913190787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:32.915411 containerd[1463]: time="2025-10-31T00:40:32.913327553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:32.964343 systemd-networkd[1368]: calic58522cdbd4: Link UP Oct 31 00:40:32.972937 systemd-networkd[1368]: calic58522cdbd4: Gained carrier Oct 31 00:40:33.025063 sshd[4422]: pam_unix(sshd:session): session closed for user core Oct 31 00:40:33.025522 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:33.031852 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:41216.service: Deactivated successfully. Oct 31 00:40:33.035862 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 00:40:33.036933 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.539 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--9754x-eth0 coredns-66bc5c9577- kube-system 01815706-9b05-4375-91b1-4cc444b8c451 1007 0 2025-10-31 00:39:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-9754x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic58522cdbd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.539 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.621 [INFO][4388] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" HandleID="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.622 [INFO][4388] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" HandleID="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bfb20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-9754x", "timestamp":"2025-10-31 00:40:32.621973994 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.622 [INFO][4388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.759 [INFO][4388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.759 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.808 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.827 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.842 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.850 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.856 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.857 [INFO][4388] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.862 [INFO][4388] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662 Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.873 [INFO][4388] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.894 [INFO][4388] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.917 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" host="localhost" Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.917 [INFO][4388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
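The ipam_plugin.go 275 records dump the Go request struct inline, which makes them hard to scan. Reassembled, the coredns request is just the following (field names and values exactly as logged; autoAssignArgs is a local stand-in type declared only so the sketch compiles on its own, since the log does not show Calico's package):

    package main

    import "fmt"

    // autoAssignArgs mirrors the fields printed in the "Auto assigning IP"
    // records. It is a local stand-in for Calico's ipam.AutoAssignArgs.
    type autoAssignArgs struct {
    	Num4, Num6  int
    	HandleID    *string
    	Attrs       map[string]string
    	Hostname    string
    	IntendedUse string
    }

    func main() {
    	handle := "k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662"
    	args := autoAssignArgs{
    		Num4:     1, // one IPv4 address requested ...
    		Num6:     0, // ... and no IPv6, matching "IPv4=1 IPv6=0"
    		HandleID: &handle,
    		Attrs: map[string]string{
    			"namespace": "kube-system",
    			"node":      "localhost",
    			"pod":       "coredns-66bc5c9577-9754x",
    		},
    		Hostname:    "localhost",
    		IntendedUse: "Workload",
    	}
    	// HandleID prints as a pointer, just like (*string)(0xc0000bfb20) in the log.
    	fmt.Printf("%+v\n", args)
    }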
Oct 31 00:40:33.042260 containerd[1463]: 2025-10-31 00:40:32.917 [INFO][4388] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" HandleID="k8s-pod-network.77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:33.046164 containerd[1463]: 2025-10-31 00:40:32.946 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9754x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"01815706-9b05-4375-91b1-4cc444b8c451", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-9754x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic58522cdbd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.046164 containerd[1463]: 2025-10-31 00:40:32.946 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:33.046164 containerd[1463]: 2025-10-31 00:40:32.946 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic58522cdbd4 ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:33.046164 containerd[1463]: 2025-10-31 00:40:32.985 
[INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:33.046164 containerd[1463]: 2025-10-31 00:40:33.000 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9754x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"01815706-9b05-4375-91b1-4cc444b8c451", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662", Pod:"coredns-66bc5c9577-9754x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic58522cdbd4", MAC:"ae:65:1a:ab:29:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.046164 containerd[1463]: 2025-10-31 00:40:33.026 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662" Namespace="kube-system" Pod="coredns-66bc5c9577-9754x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:33.044905 systemd[1]: Started cri-containerd-1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f.scope - libcontainer container 1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f. Oct 31 00:40:33.046712 systemd-logind[1448]: Removed session 8. 
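The coredns endpoint is the only one in this trace with named ports, and the WorkloadEndpointPort dump prints their Port values in hex. Decoded, they are CoreDNS's usual set: 0x35 = 53 for dns (UDP) and dns-tcp (TCP), 0x23c1 = 9153 for metrics, 0x1f90 = 8080 for the liveness probe, and 0x1ff5 = 8181 for the readiness probe. A one-screen snippet confirming the conversions:

    package main

    import "fmt"

    func main() {
    	// Decode the hex Port values printed in the coredns WorkloadEndpointPort list.
    	ports := map[string]uint16{
    		"dns (UDP)":       0x35,   // 53
    		"dns-tcp (TCP)":   0x35,   // 53
    		"metrics":         0x23c1, // 9153
    		"liveness-probe":  0x1f90, // 8080
    		"readiness-probe": 0x1ff5, // 8181
    	}
    	for name, p := range ports {
    		fmt.Printf("%-16s %d\n", name, p)
    	}
    }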
Oct 31 00:40:33.072638 containerd[1463]: time="2025-10-31T00:40:33.072453330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-vthwl,Uid:e0234a26-22e7-4dab-acf3-a0c995470142,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3\"" Oct 31 00:40:33.076018 containerd[1463]: time="2025-10-31T00:40:33.075777435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:40:33.088995 systemd-networkd[1368]: cali75b412be47c: Link UP Oct 31 00:40:33.090326 systemd-networkd[1368]: cali75b412be47c: Gained carrier Oct 31 00:40:33.100358 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.632 [INFO][4404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76599bb565--s49cl-eth0 whisker-76599bb565- calico-system acc770d5-5267-4ed9-8f3a-c4a12b51e0b8 1047 0 2025-10-31 00:40:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76599bb565 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76599bb565-s49cl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali75b412be47c [] [] }} ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.633 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-eth0" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.690 [INFO][4428] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" HandleID="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Workload="localhost-k8s-whisker--76599bb565--s49cl-eth0" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.690 [INFO][4428] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" HandleID="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Workload="localhost-k8s-whisker--76599bb565--s49cl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003075c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76599bb565-s49cl", "timestamp":"2025-10-31 00:40:32.690158617 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.690 [INFO][4428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.918 [INFO][4428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.918 [INFO][4428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.954 [INFO][4428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:32.993 [INFO][4428] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.026 [INFO][4428] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.043 [INFO][4428] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.048 [INFO][4428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.048 [INFO][4428] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.051 [INFO][4428] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037 Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.059 [INFO][4428] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.070 [INFO][4428] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.070 [INFO][4428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" host="localhost" Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.070 [INFO][4428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
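The timestamps show how the host-wide IPAM lock serializes the four concurrent CNI ADDs: [4385] holds the lock from 32.589 to 32.662; [4389] asks at 32.601 but acquires only at 32.678; [4388] asks at 32.622 and acquires at 32.759; and [4428] (the whisker pod, above) asks at 32.690 and acquires at 32.918, roughly 230 ms later. Each plugin invocation simply queues behind the previous one. The toy Go program below reproduces that queueing with a shared mutex; the 70 ms hold time is an arbitrary stand-in for the assign-and-write-block work:

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    func main() {
    	// Four goroutines contend for one lock, like the four IPAM requests
    	// above contending for the host-wide IPAM lock.
    	var ipamLock sync.Mutex
    	var wg sync.WaitGroup
    	for i := 1; i <= 4; i++ {
    		wg.Add(1)
    		go func(id int) {
    			defer wg.Done()
    			asked := time.Now()
    			ipamLock.Lock()
    			fmt.Printf("request %d waited %v for the lock\n", id, time.Since(asked).Round(time.Millisecond))
    			time.Sleep(70 * time.Millisecond) // stand-in for assign + block write
    			ipamLock.Unlock()
    		}(i)
    	}
    	wg.Wait()
    }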
Oct 31 00:40:33.110991 containerd[1463]: 2025-10-31 00:40:33.071 [INFO][4428] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" HandleID="k8s-pod-network.498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Workload="localhost-k8s-whisker--76599bb565--s49cl-eth0" Oct 31 00:40:33.111757 containerd[1463]: 2025-10-31 00:40:33.084 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76599bb565--s49cl-eth0", GenerateName:"whisker-76599bb565-", Namespace:"calico-system", SelfLink:"", UID:"acc770d5-5267-4ed9-8f3a-c4a12b51e0b8", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 40, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76599bb565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76599bb565-s49cl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali75b412be47c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.111757 containerd[1463]: 2025-10-31 00:40:33.084 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-eth0" Oct 31 00:40:33.111757 containerd[1463]: 2025-10-31 00:40:33.084 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75b412be47c ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-eth0" Oct 31 00:40:33.111757 containerd[1463]: 2025-10-31 00:40:33.091 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-eth0" Oct 31 00:40:33.111757 containerd[1463]: 2025-10-31 00:40:33.091 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76599bb565--s49cl-eth0", GenerateName:"whisker-76599bb565-", Namespace:"calico-system", SelfLink:"", UID:"acc770d5-5267-4ed9-8f3a-c4a12b51e0b8", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 40, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76599bb565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037", Pod:"whisker-76599bb565-s49cl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali75b412be47c", MAC:"da:47:60:c7:1c:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.111757 containerd[1463]: 2025-10-31 00:40:33.105 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037" Namespace="calico-system" Pod="whisker-76599bb565-s49cl" WorkloadEndpoint="localhost-k8s-whisker--76599bb565--s49cl-eth0" Oct 31 00:40:33.156880 containerd[1463]: time="2025-10-31T00:40:33.156690008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:33.157163 containerd[1463]: time="2025-10-31T00:40:33.156858214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:33.157163 containerd[1463]: time="2025-10-31T00:40:33.156885347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.157163 containerd[1463]: time="2025-10-31T00:40:33.157009538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.184150 containerd[1463]: time="2025-10-31T00:40:33.184005752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6548595d47-2xk9x,Uid:d4810036-8734-4e5d-affc-6c36413b2262,Namespace:calico-system,Attempt:1,} returns sandbox id \"1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f\"" Oct 31 00:40:33.189888 systemd[1]: Started cri-containerd-77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662.scope - libcontainer container 77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662. Oct 31 00:40:33.195760 systemd-networkd[1368]: calia41d1875609: Link UP Oct 31 00:40:33.197790 systemd-networkd[1368]: calia41d1875609: Gained carrier Oct 31 00:40:33.210197 containerd[1463]: time="2025-10-31T00:40:33.210100695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:33.210412 containerd[1463]: time="2025-10-31T00:40:33.210160411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:33.210412 containerd[1463]: time="2025-10-31T00:40:33.210200820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.210412 containerd[1463]: time="2025-10-31T00:40:33.210302617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.213907 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:33.234828 systemd[1]: Started cri-containerd-498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037.scope - libcontainer container 498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037. Oct 31 00:40:33.249176 containerd[1463]: time="2025-10-31T00:40:33.249135447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9754x,Uid:01815706-9b05-4375-91b1-4cc444b8c451,Namespace:kube-system,Attempt:1,} returns sandbox id \"77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662\"" Oct 31 00:40:33.250078 kubelet[2501]: E1031 00:40:33.249889 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:33.259832 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:33.264964 containerd[1463]: time="2025-10-31T00:40:33.264165242Z" level=info msg="CreateContainer within sandbox \"77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:32.834 [INFO][4454] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0 calico-apiserver-796b6cb4bb- calico-apiserver bb2918cc-8a31-4686-bd11-d009c753fde6 1063 0 2025-10-31 00:39:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:796b6cb4bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-796b6cb4bb-6pz6b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia41d1875609 [] [] }} ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:32.834 [INFO][4454] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.037 [INFO][4525] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" HandleID="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.041 [INFO][4525] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" HandleID="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-796b6cb4bb-6pz6b", "timestamp":"2025-10-31 00:40:33.036827128 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.042 [INFO][4525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.071 [INFO][4525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.071 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.101 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.116 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.124 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.127 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.131 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.131 [INFO][4525] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.133 [INFO][4525] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5 Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.150 [INFO][4525] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.171 [INFO][4525] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.171 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" host="localhost" Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.172 [INFO][4525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:33.272356 containerd[1463]: 2025-10-31 00:40:33.172 [INFO][4525] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" HandleID="k8s-pod-network.4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:33.273410 containerd[1463]: 2025-10-31 00:40:33.190 [INFO][4454] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb2918cc-8a31-4686-bd11-d009c753fde6", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-796b6cb4bb-6pz6b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia41d1875609", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.273410 containerd[1463]: 2025-10-31 00:40:33.190 [INFO][4454] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:33.273410 containerd[1463]: 2025-10-31 00:40:33.190 [INFO][4454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia41d1875609 ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:33.273410 containerd[1463]: 2025-10-31 00:40:33.199 [INFO][4454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:33.273410 containerd[1463]: 2025-10-31 00:40:33.200 [INFO][4454] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb2918cc-8a31-4686-bd11-d009c753fde6", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5", Pod:"calico-apiserver-796b6cb4bb-6pz6b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia41d1875609", MAC:"56:19:27:a0:00:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.273410 containerd[1463]: 2025-10-31 00:40:33.266 [INFO][4454] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5" Namespace="calico-apiserver" Pod="calico-apiserver-796b6cb4bb-6pz6b" WorkloadEndpoint="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.083 [INFO][4540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.083 [INFO][4540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" iface="eth0" netns="/var/run/netns/cni-6f761dea-289e-3808-e752-e3d6e7b39d78" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.085 [INFO][4540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" iface="eth0" netns="/var/run/netns/cni-6f761dea-289e-3808-e752-e3d6e7b39d78" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.088 [INFO][4540] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" iface="eth0" netns="/var/run/netns/cni-6f761dea-289e-3808-e752-e3d6e7b39d78" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.088 [INFO][4540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.089 [INFO][4540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.198 [INFO][4622] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.198 [INFO][4622] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.198 [INFO][4622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.263 [WARNING][4622] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.263 [INFO][4622] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.267 [INFO][4622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:33.275265 containerd[1463]: 2025-10-31 00:40:33.270 [INFO][4540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:33.275823 containerd[1463]: time="2025-10-31T00:40:33.275423385Z" level=info msg="TearDown network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\" successfully" Oct 31 00:40:33.275823 containerd[1463]: time="2025-10-31T00:40:33.275464013Z" level=info msg="StopPodSandbox for \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\" returns successfully" Oct 31 00:40:33.282286 containerd[1463]: time="2025-10-31T00:40:33.282241624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gj62,Uid:b8404757-a167-4c06-a272-e0eda36ae575,Namespace:calico-system,Attempt:1,}" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.101 [INFO][4556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.102 [INFO][4556] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" iface="eth0" netns="/var/run/netns/cni-e073c7dd-4de0-51c1-fa67-1514cd21f014" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.102 [INFO][4556] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" iface="eth0" netns="/var/run/netns/cni-e073c7dd-4de0-51c1-fa67-1514cd21f014" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.103 [INFO][4556] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" iface="eth0" netns="/var/run/netns/cni-e073c7dd-4de0-51c1-fa67-1514cd21f014" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.103 [INFO][4556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.103 [INFO][4556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.211 [INFO][4627] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.211 [INFO][4627] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.267 [INFO][4627] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.277 [WARNING][4627] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.277 [INFO][4627] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.280 [INFO][4627] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:33.292692 containerd[1463]: 2025-10-31 00:40:33.286 [INFO][4556] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:33.293716 containerd[1463]: time="2025-10-31T00:40:33.293215084Z" level=info msg="TearDown network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\" successfully" Oct 31 00:40:33.293716 containerd[1463]: time="2025-10-31T00:40:33.293247126Z" level=info msg="StopPodSandbox for \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\" returns successfully" Oct 31 00:40:33.298540 containerd[1463]: time="2025-10-31T00:40:33.298374513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vkzq5,Uid:87a28487-9bca-4535-a48a-e42ddac97eba,Namespace:calico-system,Attempt:1,}" Oct 31 00:40:33.301518 containerd[1463]: time="2025-10-31T00:40:33.301473160Z" level=info msg="CreateContainer within sandbox \"77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d809689cef96841f61889dec64983b0fd8a74f26e608ec7ec58e4c69d495187\"" Oct 31 00:40:33.302857 containerd[1463]: time="2025-10-31T00:40:33.302810066Z" level=info msg="StartContainer for \"4d809689cef96841f61889dec64983b0fd8a74f26e608ec7ec58e4c69d495187\"" Oct 31 00:40:33.310874 containerd[1463]: time="2025-10-31T00:40:33.310774031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:33.311032 containerd[1463]: time="2025-10-31T00:40:33.310839969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:33.311032 containerd[1463]: time="2025-10-31T00:40:33.310863245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.311032 containerd[1463]: time="2025-10-31T00:40:33.310953820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.314120 containerd[1463]: time="2025-10-31T00:40:33.313872407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76599bb565-s49cl,Uid:acc770d5-5267-4ed9-8f3a-c4a12b51e0b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"498e10f16c848accc26582123fe20ddedcaab8a48b330fdeb62fb1df9b42c037\"" Oct 31 00:40:33.343046 systemd[1]: Started cri-containerd-4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5.scope - libcontainer container 4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5. Oct 31 00:40:33.349028 systemd[1]: Started cri-containerd-4d809689cef96841f61889dec64983b0fd8a74f26e608ec7ec58e4c69d495187.scope - libcontainer container 4d809689cef96841f61889dec64983b0fd8a74f26e608ec7ec58e4c69d495187. 
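The teardown entries just above show Calico's IPAM releasing addresses by handle ID ("k8s-pod-network.<containerID>") and logging a WARNING — not an error — when the handle is already gone. That is deliberate: CNI DEL can be invoked more than once for the same container, so release must be idempotent. A toy Go sketch of that behavior (names and map layout are illustrative, not Calico's actual data model):

package main

import "fmt"

// ipam is a toy of the handle-based release seen in the log: releasing an
// unknown handle is a no-op warning ("Asked to release address but it
// doesn't exist. Ignoring"), which makes repeated CNI DELs safe.
type ipam struct {
	byHandle map[string]string // handle ID -> assigned IP
}

func (m *ipam) releaseByHandle(handle string) {
	ip, ok := m.byHandle[handle]
	if !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist, ignoring\n", handle)
		return
	}
	delete(m.byHandle, handle)
	fmt.Printf("released %s (%s)\n", ip, handle)
}

func main() {
	m := &ipam{byHandle: map[string]string{
		"k8s-pod-network.4fbbd16d98e2": "192.168.88.134",
	}}
	m.releaseByHandle("k8s-pod-network.4fbbd16d98e2") // first DEL releases the IP
	m.releaseByHandle("k8s-pod-network.4fbbd16d98e2") // second DEL is ignored
}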
Oct 31 00:40:33.378528 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:33.417412 containerd[1463]: time="2025-10-31T00:40:33.417367073Z" level=info msg="StartContainer for \"4d809689cef96841f61889dec64983b0fd8a74f26e608ec7ec58e4c69d495187\" returns successfully" Oct 31 00:40:33.432705 containerd[1463]: time="2025-10-31T00:40:33.432309239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796b6cb4bb-6pz6b,Uid:bb2918cc-8a31-4686-bd11-d009c753fde6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5\"" Oct 31 00:40:33.477306 containerd[1463]: time="2025-10-31T00:40:33.477078666Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:33.490813 containerd[1463]: time="2025-10-31T00:40:33.478513932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:40:33.491010 containerd[1463]: time="2025-10-31T00:40:33.478631170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:40:33.491327 kubelet[2501]: E1031 00:40:33.491252 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:33.491397 kubelet[2501]: E1031 00:40:33.491331 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:33.492449 kubelet[2501]: E1031 00:40:33.491602 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-796b6cb4bb-vthwl_calico-apiserver(e0234a26-22e7-4dab-acf3-a0c995470142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:33.492449 kubelet[2501]: E1031 00:40:33.491686 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142" Oct 31 00:40:33.494413 containerd[1463]: time="2025-10-31T00:40:33.492838149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:40:33.514276 systemd-networkd[1368]: 
calif2abb0773e4: Link UP Oct 31 00:40:33.515139 systemd-networkd[1368]: calif2abb0773e4: Gained carrier Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.391 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6gj62-eth0 csi-node-driver- calico-system b8404757-a167-4c06-a272-e0eda36ae575 1083 0 2025-10-31 00:39:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6gj62 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif2abb0773e4 [] [] }} ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.391 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.450 [INFO][4829] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" HandleID="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.451 [INFO][4829] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" HandleID="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6gj62", "timestamp":"2025-10-31 00:40:33.450851194 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.451 [INFO][4829] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.451 [INFO][4829] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.451 [INFO][4829] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.461 [INFO][4829] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.472 [INFO][4829] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.479 [INFO][4829] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.481 [INFO][4829] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.484 [INFO][4829] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.484 [INFO][4829] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.485 [INFO][4829] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8 Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.489 [INFO][4829] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.504 [INFO][4829] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.504 [INFO][4829] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" host="localhost" Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.504 [INFO][4829] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
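The ipam.go line numbers in the entries above (691/394 "Looking up existing affinities", 511 "Trying affinity", 158 "Attempting to load block", 1219 "Attempting to assign", 1246/1262 the write that claims the IP) trace one assignment pass under the host-wide lock: prefer a block already affine to this host, then claim an address from it. A deliberately simplified Go model of that pass — not Calico's real code, just the control flow:

package main

import "fmt"

// block is a toy IPAM block; Calico's real blocks are /26 CIDRs with
// per-host affinity and a compare-and-swap write to the datastore.
type block struct {
	cidr string
	free []string // unassigned addresses in the block
}

// autoAssign mirrors the logged pass: walk the host's affine blocks and
// take the first free address; a real implementation would fall back to
// claiming a new block if none have space.
func autoAssign(host string, affine []*block) (string, error) {
	for _, b := range affine {
		if len(b.free) == 0 {
			continue // block affine but full; try the next one
		}
		ip := b.free[0]
		b.free = b.free[1:]
		return ip, nil // "Successfully claimed IPs"
	}
	return "", fmt.Errorf("no affine block with free addresses for %s", host)
}

func main() {
	b := &block{cidr: "192.168.88.128/26", free: []string{"192.168.88.135"}}
	ip, err := autoAssign("localhost", []*block{b})
	fmt.Println(ip, err)
}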
Oct 31 00:40:33.533311 containerd[1463]: 2025-10-31 00:40:33.504 [INFO][4829] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" HandleID="k8s-pod-network.c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.534030 containerd[1463]: 2025-10-31 00:40:33.509 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6gj62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8404757-a167-4c06-a272-e0eda36ae575", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6gj62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2abb0773e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.534030 containerd[1463]: 2025-10-31 00:40:33.509 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.534030 containerd[1463]: 2025-10-31 00:40:33.509 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2abb0773e4 ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.534030 containerd[1463]: 2025-10-31 00:40:33.515 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.534030 containerd[1463]: 2025-10-31 00:40:33.516 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6gj62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8404757-a167-4c06-a272-e0eda36ae575", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8", Pod:"csi-node-driver-6gj62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2abb0773e4", MAC:"e2:26:50:3c:b7:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.534030 containerd[1463]: 2025-10-31 00:40:33.527 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8" Namespace="calico-system" Pod="csi-node-driver-6gj62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:33.556387 containerd[1463]: time="2025-10-31T00:40:33.555999010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:33.556387 containerd[1463]: time="2025-10-31T00:40:33.556139072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:33.556387 containerd[1463]: time="2025-10-31T00:40:33.556175372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.556387 containerd[1463]: time="2025-10-31T00:40:33.556301988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.582479 systemd[1]: Started cri-containerd-c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8.scope - libcontainer container c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8. 
Oct 31 00:40:33.603690 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:33.618750 containerd[1463]: time="2025-10-31T00:40:33.618711270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gj62,Uid:b8404757-a167-4c06-a272-e0eda36ae575,Namespace:calico-system,Attempt:1,} returns sandbox id \"c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8\"" Oct 31 00:40:33.645293 systemd-networkd[1368]: cali3214c0070d8: Link UP Oct 31 00:40:33.646093 systemd-networkd[1368]: cali3214c0070d8: Gained carrier Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.430 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--vkzq5-eth0 goldmane-7c778bb748- calico-system 87a28487-9bca-4535-a48a-e42ddac97eba 1085 0 2025-10-31 00:39:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-vkzq5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3214c0070d8 [] [] }} ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.430 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.477 [INFO][4852] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" HandleID="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.478 [INFO][4852] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" HandleID="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-vkzq5", "timestamp":"2025-10-31 00:40:33.477908066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.479 [INFO][4852] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.505 [INFO][4852] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
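Each "Started cri-containerd-<id>.scope" entry above is a transient systemd scope that containerd's runc shim creates for a single container; kubelet reaches those containers over containerd's CRI socket. A minimal read-only sketch of that path, assuming the CRI v1 API and containerd's default socket location (/run/containerd/containerd.sock):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Dial the CRI socket the same way kubelet does (path is an assumption;
	// check the node's kubelet --container-runtime-endpoint flag).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Each running container here corresponds to one
		// cri-containerd-<id>.scope unit in the log above.
		name := "?"
		if c.Metadata != nil {
			name = c.Metadata.Name
		}
		fmt.Printf("%.12s %s %s\n", c.Id, name, c.State)
	}
}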
Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.506 [INFO][4852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.563 [INFO][4852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.573 [INFO][4852] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.577 [INFO][4852] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.579 [INFO][4852] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.581 [INFO][4852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.581 [INFO][4852] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.583 [INFO][4852] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71 Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.593 [INFO][4852] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.636 [INFO][4852] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.636 [INFO][4852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" host="localhost" Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.636 [INFO][4852] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:40:33.724671 containerd[1463]: 2025-10-31 00:40:33.636 [INFO][4852] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" HandleID="k8s-pod-network.074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.725589 containerd[1463]: 2025-10-31 00:40:33.640 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--vkzq5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87a28487-9bca-4535-a48a-e42ddac97eba", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-vkzq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3214c0070d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.725589 containerd[1463]: 2025-10-31 00:40:33.640 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.725589 containerd[1463]: 2025-10-31 00:40:33.640 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3214c0070d8 ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.725589 containerd[1463]: 2025-10-31 00:40:33.645 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.725589 containerd[1463]: 2025-10-31 00:40:33.646 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--vkzq5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87a28487-9bca-4535-a48a-e42ddac97eba", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71", Pod:"goldmane-7c778bb748-vkzq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3214c0070d8", MAC:"7e:46:bf:48:5c:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:33.725589 containerd[1463]: 2025-10-31 00:40:33.720 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71" Namespace="calico-system" Pod="goldmane-7c778bb748-vkzq5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:33.794457 containerd[1463]: time="2025-10-31T00:40:33.794243693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:40:33.794457 containerd[1463]: time="2025-10-31T00:40:33.794327506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:40:33.794457 containerd[1463]: time="2025-10-31T00:40:33.794347925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.796717 containerd[1463]: time="2025-10-31T00:40:33.796637853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:40:33.810583 systemd[1]: run-netns-cni\x2de073c7dd\x2d4de0\x2d51c1\x2dfa67\x2d1514cd21f014.mount: Deactivated successfully. Oct 31 00:40:33.810705 systemd[1]: run-netns-cni\x2d6f761dea\x2d289e\x2d3808\x2de752\x2de3d6e7b39d78.mount: Deactivated successfully. Oct 31 00:40:33.831785 systemd[1]: Started cri-containerd-074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71.scope - libcontainer container 074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71. 
Oct 31 00:40:33.846352 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:40:33.853780 containerd[1463]: time="2025-10-31T00:40:33.853732964Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:33.872410 containerd[1463]: time="2025-10-31T00:40:33.872367540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vkzq5,Uid:87a28487-9bca-4535-a48a-e42ddac97eba,Namespace:calico-system,Attempt:1,} returns sandbox id \"074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71\"" Oct 31 00:40:33.894205 containerd[1463]: time="2025-10-31T00:40:33.894122336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:40:33.894306 containerd[1463]: time="2025-10-31T00:40:33.894168565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:40:33.894474 kubelet[2501]: E1031 00:40:33.894414 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:40:33.894474 kubelet[2501]: E1031 00:40:33.894472 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:40:33.894826 kubelet[2501]: E1031 00:40:33.894780 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6548595d47-2xk9x_calico-system(d4810036-8734-4e5d-affc-6c36413b2262): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:33.894940 kubelet[2501]: E1031 00:40:33.894843 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262" Oct 31 00:40:33.894984 containerd[1463]: time="2025-10-31T00:40:33.894825191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:40:34.024955 systemd-networkd[1368]: caliba3e4f64233: Gained IPv6LL Oct 31 00:40:34.085940 kubelet[2501]: E1031 
00:40:34.085764 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142" Oct 31 00:40:34.087829 kubelet[2501]: E1031 00:40:34.087792 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:34.089437 kubelet[2501]: E1031 00:40:34.089410 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262" Oct 31 00:40:34.152807 systemd-networkd[1368]: cali75b412be47c: Gained IPv6LL Oct 31 00:40:34.382802 containerd[1463]: time="2025-10-31T00:40:34.382657592Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:34.449363 containerd[1463]: time="2025-10-31T00:40:34.449242341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:40:34.449363 containerd[1463]: time="2025-10-31T00:40:34.449302999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:40:34.449767 kubelet[2501]: E1031 00:40:34.449716 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:40:34.450219 kubelet[2501]: E1031 00:40:34.449770 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:40:34.450219 kubelet[2501]: E1031 00:40:34.450006 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-76599bb565-s49cl_calico-system(acc770d5-5267-4ed9-8f3a-c4a12b51e0b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:34.450328 containerd[1463]: time="2025-10-31T00:40:34.450285555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:40:34.568734 kubelet[2501]: I1031 00:40:34.568649 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9754x" podStartSLOduration=52.568631284 podStartE2EDuration="52.568631284s" podCreationTimestamp="2025-10-31 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:40:34.568256818 +0000 UTC m=+56.842239967" watchObservedRunningTime="2025-10-31 00:40:34.568631284 +0000 UTC m=+56.842614433" Oct 31 00:40:34.600898 systemd-networkd[1368]: cali7b4f7222e81: Gained IPv6LL Oct 31 00:40:34.856875 systemd-networkd[1368]: calia41d1875609: Gained IPv6LL Oct 31 00:40:34.857777 systemd-networkd[1368]: calif2abb0773e4: Gained IPv6LL Oct 31 00:40:34.885847 containerd[1463]: time="2025-10-31T00:40:34.885791358Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:34.929127 containerd[1463]: time="2025-10-31T00:40:34.929037018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:40:34.929127 containerd[1463]: time="2025-10-31T00:40:34.929080622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:40:34.929460 kubelet[2501]: E1031 00:40:34.929398 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:34.929512 kubelet[2501]: E1031 00:40:34.929462 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:34.929756 kubelet[2501]: E1031 00:40:34.929701 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-796b6cb4bb-6pz6b_calico-apiserver(bb2918cc-8a31-4686-bd11-d009c753fde6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:34.929840 kubelet[2501]: E1031 00:40:34.929779 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6" Oct 31 00:40:34.930189 containerd[1463]: time="2025-10-31T00:40:34.929951682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:40:35.048816 systemd-networkd[1368]: calic58522cdbd4: Gained IPv6LL Oct 31 00:40:35.092716 kubelet[2501]: E1031 00:40:35.092514 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142" Oct 31 00:40:35.092716 kubelet[2501]: E1031 00:40:35.092631 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:35.093280 kubelet[2501]: E1031 00:40:35.093203 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262" Oct 31 00:40:35.093280 kubelet[2501]: E1031 00:40:35.093222 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6" Oct 31 00:40:35.368756 containerd[1463]: time="2025-10-31T00:40:35.368672337Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:35.446287 containerd[1463]: time="2025-10-31T00:40:35.446193386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:40:35.446923 containerd[1463]: time="2025-10-31T00:40:35.446240798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:40:35.446967 kubelet[2501]: E1031 00:40:35.446498 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:40:35.446967 kubelet[2501]: E1031 00:40:35.446554 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:40:35.446967 kubelet[2501]: E1031 00:40:35.446811 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:35.447307 containerd[1463]: time="2025-10-31T00:40:35.447243292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:40:35.625092 systemd-networkd[1368]: cali3214c0070d8: Gained IPv6LL Oct 31 00:40:35.866479 containerd[1463]: time="2025-10-31T00:40:35.866402235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:35.867824 containerd[1463]: time="2025-10-31T00:40:35.867759386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:40:35.868258 containerd[1463]: time="2025-10-31T00:40:35.867832978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:40:35.868564 kubelet[2501]: E1031 00:40:35.868465 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:40:35.869004 kubelet[2501]: E1031 00:40:35.868563 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:40:35.869004 kubelet[2501]: E1031 00:40:35.868844 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vkzq5_calico-system(87a28487-9bca-4535-a48a-e42ddac97eba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:35.869004 kubelet[2501]: E1031 00:40:35.868890 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba" Oct 31 00:40:35.869493 containerd[1463]: time="2025-10-31T00:40:35.869457738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:40:36.094290 kubelet[2501]: E1031 00:40:36.094243 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:36.094773 kubelet[2501]: E1031 00:40:36.094707 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba" Oct 31 00:40:36.184200 containerd[1463]: time="2025-10-31T00:40:36.184126804Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:36.257583 containerd[1463]: time="2025-10-31T00:40:36.257472437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:40:36.257747 containerd[1463]: time="2025-10-31T00:40:36.257564205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:40:36.257876 kubelet[2501]: E1031 00:40:36.257823 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:40:36.257981 kubelet[2501]: E1031 00:40:36.257877 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:40:36.258085 kubelet[2501]: E1031 00:40:36.258060 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-76599bb565-s49cl_calico-system(acc770d5-5267-4ed9-8f3a-c4a12b51e0b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 
00:40:36.258149 kubelet[2501]: E1031 00:40:36.258111 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76599bb565-s49cl" podUID="acc770d5-5267-4ed9-8f3a-c4a12b51e0b8" Oct 31 00:40:36.258423 containerd[1463]: time="2025-10-31T00:40:36.258382820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:40:36.652244 containerd[1463]: time="2025-10-31T00:40:36.652155968Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:36.653594 containerd[1463]: time="2025-10-31T00:40:36.653532293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:40:36.653594 containerd[1463]: time="2025-10-31T00:40:36.653573463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:40:36.653851 kubelet[2501]: E1031 00:40:36.653801 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:40:36.653915 kubelet[2501]: E1031 00:40:36.653865 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:40:36.653993 kubelet[2501]: E1031 00:40:36.653961 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:36.654073 kubelet[2501]: E1031 00:40:36.654018 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:37.100003 kubelet[2501]: E1031 00:40:37.099913 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76599bb565-s49cl" podUID="acc770d5-5267-4ed9-8f3a-c4a12b51e0b8" Oct 31 00:40:37.106826 kubelet[2501]: E1031 00:40:37.106732 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575" Oct 31 00:40:37.828107 containerd[1463]: time="2025-10-31T00:40:37.828052258Z" level=info msg="StopPodSandbox for \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\"" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.869 [WARNING][4989] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0", GenerateName:"calico-kube-controllers-6548595d47-", Namespace:"calico-system", SelfLink:"", UID:"d4810036-8734-4e5d-affc-6c36413b2262", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6548595d47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f", Pod:"calico-kube-controllers-6548595d47-2xk9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b4f7222e81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.869 [INFO][4989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.869 [INFO][4989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" iface="eth0" netns="" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.869 [INFO][4989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.869 [INFO][4989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.893 [INFO][4999] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.893 [INFO][4999] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.893 [INFO][4999] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.899 [WARNING][4999] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.899 [INFO][4999] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.901 [INFO][4999] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:37.906977 containerd[1463]: 2025-10-31 00:40:37.904 [INFO][4989] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.907562 containerd[1463]: time="2025-10-31T00:40:37.907043645Z" level=info msg="TearDown network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\" successfully" Oct 31 00:40:37.907562 containerd[1463]: time="2025-10-31T00:40:37.907078913Z" level=info msg="StopPodSandbox for \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\" returns successfully" Oct 31 00:40:37.916014 containerd[1463]: time="2025-10-31T00:40:37.915931312Z" level=info msg="RemovePodSandbox for \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\"" Oct 31 00:40:37.918383 containerd[1463]: time="2025-10-31T00:40:37.918334882Z" level=info msg="Forcibly stopping sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\"" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.955 [WARNING][5016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0", GenerateName:"calico-kube-controllers-6548595d47-", Namespace:"calico-system", SelfLink:"", UID:"d4810036-8734-4e5d-affc-6c36413b2262", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6548595d47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f6b612ce901a79d7cb8d139079c32f556aacefe540c44962249cf6b2005085f", Pod:"calico-kube-controllers-6548595d47-2xk9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b4f7222e81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.955 [INFO][5016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.955 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" iface="eth0" netns="" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.955 [INFO][5016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.955 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.976 [INFO][5026] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.978 [INFO][5026] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.978 [INFO][5026] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.984 [WARNING][5026] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.984 [INFO][5026] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" HandleID="k8s-pod-network.3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Workload="localhost-k8s-calico--kube--controllers--6548595d47--2xk9x-eth0" Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.985 [INFO][5026] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:37.991636 containerd[1463]: 2025-10-31 00:40:37.988 [INFO][5016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05" Oct 31 00:40:37.992135 containerd[1463]: time="2025-10-31T00:40:37.991652580Z" level=info msg="TearDown network for sandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\" successfully" Oct 31 00:40:38.012106 containerd[1463]: time="2025-10-31T00:40:38.012030964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:38.012106 containerd[1463]: time="2025-10-31T00:40:38.012113323Z" level=info msg="RemovePodSandbox \"3dfad682a2a2ecf4cf7191d7c0d762bdea63533bd362206104e1df08f5036b05\" returns successfully" Oct 31 00:40:38.014094 containerd[1463]: time="2025-10-31T00:40:38.014063453Z" level=info msg="StopPodSandbox for \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\"" Oct 31 00:40:38.034735 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:41228.service - OpenSSH per-connection server daemon (10.0.0.1:41228). Oct 31 00:40:38.084882 sshd[5054]: Accepted publickey for core from 10.0.0.1 port 41228 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:40:38.087021 sshd[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:40:38.092433 systemd-logind[1448]: New session 9 of user core. Oct 31 00:40:38.100785 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.055 [WARNING][5047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w2k7k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc3f2d9-311f-49d7-b160-402ffa40a7c3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac", Pod:"coredns-66bc5c9577-w2k7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib73bfa0c7af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.055 [INFO][5047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.055 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" iface="eth0" netns="" Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.055 [INFO][5047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.055 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.077 [INFO][5059] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.078 [INFO][5059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.111 [INFO][5059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.117 [WARNING][5059] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.117 [INFO][5059] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.118 [INFO][5059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:38.124503 containerd[1463]: 2025-10-31 00:40:38.121 [INFO][5047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.125304 containerd[1463]: time="2025-10-31T00:40:38.124539886Z" level=info msg="TearDown network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\" successfully" Oct 31 00:40:38.125304 containerd[1463]: time="2025-10-31T00:40:38.124578270Z" level=info msg="StopPodSandbox for \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\" returns successfully" Oct 31 00:40:38.125304 containerd[1463]: time="2025-10-31T00:40:38.125242765Z" level=info msg="RemovePodSandbox for \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\"" Oct 31 00:40:38.125304 containerd[1463]: time="2025-10-31T00:40:38.125283904Z" level=info msg="Forcibly stopping sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\"" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.165 [WARNING][5080] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w2k7k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc3f2d9-311f-49d7-b160-402ffa40a7c3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08e34669c9dfe97c7cd3a004a2f2931b010bfd96b382b572ee000517fd9d33ac", Pod:"coredns-66bc5c9577-w2k7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib73bfa0c7af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.165 [INFO][5080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.165 [INFO][5080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" iface="eth0" netns="" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.165 [INFO][5080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.165 [INFO][5080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.187 [INFO][5093] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.188 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.188 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.195 [WARNING][5093] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.195 [INFO][5093] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" HandleID="k8s-pod-network.b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Workload="localhost-k8s-coredns--66bc5c9577--w2k7k-eth0" Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.197 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:38.204254 containerd[1463]: 2025-10-31 00:40:38.200 [INFO][5080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a" Oct 31 00:40:38.204852 containerd[1463]: time="2025-10-31T00:40:38.204418781Z" level=info msg="TearDown network for sandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\" successfully" Oct 31 00:40:38.274363 containerd[1463]: time="2025-10-31T00:40:38.274279649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:38.274363 containerd[1463]: time="2025-10-31T00:40:38.274371426Z" level=info msg="RemovePodSandbox \"b3b85a0597e304ca3793417335e328c6a6a9595212da4069491c0c8b350b7d5a\" returns successfully" Oct 31 00:40:38.275008 containerd[1463]: time="2025-10-31T00:40:38.274981967Z" level=info msg="StopPodSandbox for \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\"" Oct 31 00:40:38.338261 sshd[5054]: pam_unix(sshd:session): session closed for user core Oct 31 00:40:38.344098 systemd[1]: sshd@8-10.0.0.63:22-10.0.0.1:41228.service: Deactivated successfully. Oct 31 00:40:38.346228 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 00:40:38.348731 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. 
Oct 31 00:40:38.350682 systemd-logind[1448]: Removed session 9. Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.316 [WARNING][5116] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" WorkloadEndpoint="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.316 [INFO][5116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.316 [INFO][5116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" iface="eth0" netns="" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.316 [INFO][5116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.316 [INFO][5116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.347 [INFO][5125] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.347 [INFO][5125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.347 [INFO][5125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.354 [WARNING][5125] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.354 [INFO][5125] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.357 [INFO][5125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:38.363340 containerd[1463]: 2025-10-31 00:40:38.360 [INFO][5116] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.363780 containerd[1463]: time="2025-10-31T00:40:38.363418318Z" level=info msg="TearDown network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\" successfully" Oct 31 00:40:38.363780 containerd[1463]: time="2025-10-31T00:40:38.363501990Z" level=info msg="StopPodSandbox for \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\" returns successfully" Oct 31 00:40:38.364240 containerd[1463]: time="2025-10-31T00:40:38.364183357Z" level=info msg="RemovePodSandbox for \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\"" Oct 31 00:40:38.364240 containerd[1463]: time="2025-10-31T00:40:38.364234335Z" level=info msg="Forcibly stopping sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\"" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.575 [WARNING][5144] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" WorkloadEndpoint="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.575 [INFO][5144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.575 [INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" iface="eth0" netns="" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.575 [INFO][5144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.575 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.600 [INFO][5152] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.600 [INFO][5152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.600 [INFO][5152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.606 [WARNING][5152] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.607 [INFO][5152] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" HandleID="k8s-pod-network.3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Workload="localhost-k8s-whisker--6c688b7869--4llw5-eth0" Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.608 [INFO][5152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:38.614988 containerd[1463]: 2025-10-31 00:40:38.611 [INFO][5144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2" Oct 31 00:40:38.614988 containerd[1463]: time="2025-10-31T00:40:38.614938871Z" level=info msg="TearDown network for sandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\" successfully" Oct 31 00:40:38.947850 containerd[1463]: time="2025-10-31T00:40:38.947512952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:38.947850 containerd[1463]: time="2025-10-31T00:40:38.947629667Z" level=info msg="RemovePodSandbox \"3f70dece5a0517f8016f53f9735ac4c995572afebc0d651fe97d2bfdedbaeef2\" returns successfully" Oct 31 00:40:38.948445 containerd[1463]: time="2025-10-31T00:40:38.948400077Z" level=info msg="StopPodSandbox for \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\"" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:38.992 [WARNING][5173] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb2918cc-8a31-4686-bd11-d009c753fde6", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5", Pod:"calico-apiserver-796b6cb4bb-6pz6b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia41d1875609", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:38.992 [INFO][5173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:38.992 [INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" iface="eth0" netns="" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:38.992 [INFO][5173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:38.992 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:39.015 [INFO][5182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:39.015 [INFO][5182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:39.015 [INFO][5182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:39.022 [WARNING][5182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:39.022 [INFO][5182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:39.024 [INFO][5182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:39.030481 containerd[1463]: 2025-10-31 00:40:39.027 [INFO][5173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.031008 containerd[1463]: time="2025-10-31T00:40:39.030529137Z" level=info msg="TearDown network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\" successfully" Oct 31 00:40:39.031008 containerd[1463]: time="2025-10-31T00:40:39.030566519Z" level=info msg="StopPodSandbox for \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\" returns successfully" Oct 31 00:40:39.031228 containerd[1463]: time="2025-10-31T00:40:39.031186136Z" level=info msg="RemovePodSandbox for \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\"" Oct 31 00:40:39.031228 containerd[1463]: time="2025-10-31T00:40:39.031222216Z" level=info msg="Forcibly stopping sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\"" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.274 [WARNING][5199] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb2918cc-8a31-4686-bd11-d009c753fde6", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fbbd16d98e299f5fab0270a9230a074a1756d74999f3b3bc275c166df9a9fe5", Pod:"calico-apiserver-796b6cb4bb-6pz6b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia41d1875609", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.274 [INFO][5199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.274 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" iface="eth0" netns="" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.274 [INFO][5199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.274 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.298 [INFO][5207] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.298 [INFO][5207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.298 [INFO][5207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.304 [WARNING][5207] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.304 [INFO][5207] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" HandleID="k8s-pod-network.1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--6pz6b-eth0" Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.306 [INFO][5207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:39.311592 containerd[1463]: 2025-10-31 00:40:39.308 [INFO][5199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec" Oct 31 00:40:39.312103 containerd[1463]: time="2025-10-31T00:40:39.311671407Z" level=info msg="TearDown network for sandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\" successfully" Oct 31 00:40:39.491406 containerd[1463]: time="2025-10-31T00:40:39.491342972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:39.491627 containerd[1463]: time="2025-10-31T00:40:39.491429378Z" level=info msg="RemovePodSandbox \"1afeb9f90ccb9388d587ec19727b5e9ba1ef36e8275a0c5ae2842f572fcc39ec\" returns successfully" Oct 31 00:40:39.492025 containerd[1463]: time="2025-10-31T00:40:39.491995882Z" level=info msg="StopPodSandbox for \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\"" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.528 [WARNING][5225] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--vkzq5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87a28487-9bca-4535-a48a-e42ddac97eba", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71", Pod:"goldmane-7c778bb748-vkzq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3214c0070d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.528 [INFO][5225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.528 [INFO][5225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" iface="eth0" netns="" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.528 [INFO][5225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.528 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.551 [INFO][5234] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.552 [INFO][5234] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.552 [INFO][5234] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.560 [WARNING][5234] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.560 [INFO][5234] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.562 [INFO][5234] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:39.568124 containerd[1463]: 2025-10-31 00:40:39.564 [INFO][5225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.568124 containerd[1463]: time="2025-10-31T00:40:39.568089864Z" level=info msg="TearDown network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\" successfully" Oct 31 00:40:39.568124 containerd[1463]: time="2025-10-31T00:40:39.568123078Z" level=info msg="StopPodSandbox for \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\" returns successfully" Oct 31 00:40:39.568820 containerd[1463]: time="2025-10-31T00:40:39.568794224Z" level=info msg="RemovePodSandbox for \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\"" Oct 31 00:40:39.568859 containerd[1463]: time="2025-10-31T00:40:39.568833029Z" level=info msg="Forcibly stopping sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\"" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.668 [WARNING][5253] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--vkzq5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87a28487-9bca-4535-a48a-e42ddac97eba", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"074a6789d890b6808a23fccc2544cb128df0ac6bf41507f0302c869027d55e71", Pod:"goldmane-7c778bb748-vkzq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3214c0070d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.668 [INFO][5253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.668 [INFO][5253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" iface="eth0" netns="" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.668 [INFO][5253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.668 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.692 [INFO][5262] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.692 [INFO][5262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.692 [INFO][5262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.750 [WARNING][5262] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.750 [INFO][5262] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" HandleID="k8s-pod-network.16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Workload="localhost-k8s-goldmane--7c778bb748--vkzq5-eth0" Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.775 [INFO][5262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:39.782560 containerd[1463]: 2025-10-31 00:40:39.779 [INFO][5253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9" Oct 31 00:40:39.783077 containerd[1463]: time="2025-10-31T00:40:39.782632013Z" level=info msg="TearDown network for sandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\" successfully" Oct 31 00:40:39.862970 containerd[1463]: time="2025-10-31T00:40:39.862770767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:39.862970 containerd[1463]: time="2025-10-31T00:40:39.862892983Z" level=info msg="RemovePodSandbox \"16275f209463286907df762874ff30bf16d142b51281319d18385400334a05b9\" returns successfully" Oct 31 00:40:39.863812 containerd[1463]: time="2025-10-31T00:40:39.863779295Z" level=info msg="StopPodSandbox for \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\"" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.909 [WARNING][5280] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9754x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"01815706-9b05-4375-91b1-4cc444b8c451", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662", Pod:"coredns-66bc5c9577-9754x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic58522cdbd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.909 [INFO][5280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.910 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" iface="eth0" netns="" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.910 [INFO][5280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.910 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.941 [INFO][5289] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.941 [INFO][5289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.941 [INFO][5289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.950 [WARNING][5289] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.950 [INFO][5289] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.952 [INFO][5289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:39.962069 containerd[1463]: 2025-10-31 00:40:39.957 [INFO][5280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:39.963230 containerd[1463]: time="2025-10-31T00:40:39.963158449Z" level=info msg="TearDown network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\" successfully" Oct 31 00:40:39.963230 containerd[1463]: time="2025-10-31T00:40:39.963199930Z" level=info msg="StopPodSandbox for \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\" returns successfully" Oct 31 00:40:39.964722 containerd[1463]: time="2025-10-31T00:40:39.964189672Z" level=info msg="RemovePodSandbox for \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\"" Oct 31 00:40:39.964722 containerd[1463]: time="2025-10-31T00:40:39.964295056Z" level=info msg="Forcibly stopping sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\"" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.028 [WARNING][5308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9754x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"01815706-9b05-4375-91b1-4cc444b8c451", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77bb2fc039bb6df227c207d9020df765451a00877ed8b431b7f07048e6de8662", Pod:"coredns-66bc5c9577-9754x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic58522cdbd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.030 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.030 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" iface="eth0" netns="" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.030 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.030 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.063 [INFO][5317] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.063 [INFO][5317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.063 [INFO][5317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.070 [WARNING][5317] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.070 [INFO][5317] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" HandleID="k8s-pod-network.f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Workload="localhost-k8s-coredns--66bc5c9577--9754x-eth0" Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.193 [INFO][5317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:40.201314 containerd[1463]: 2025-10-31 00:40:40.197 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87" Oct 31 00:40:40.201314 containerd[1463]: time="2025-10-31T00:40:40.201288266Z" level=info msg="TearDown network for sandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\" successfully" Oct 31 00:40:40.212930 containerd[1463]: time="2025-10-31T00:40:40.212872157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:40.212930 containerd[1463]: time="2025-10-31T00:40:40.212942121Z" level=info msg="RemovePodSandbox \"f4d53f994361a7b9f8164a806be6bf7ee7f0c6e803f1811cefc7f0e2785bbf87\" returns successfully" Oct 31 00:40:40.214677 containerd[1463]: time="2025-10-31T00:40:40.214242131Z" level=info msg="StopPodSandbox for \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\"" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.299 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0234a26-22e7-4dab-acf3-a0c995470142", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3", Pod:"calico-apiserver-796b6cb4bb-vthwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba3e4f64233", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.299 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.299 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" iface="eth0" netns="" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.299 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.299 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.325 [INFO][5345] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.326 [INFO][5345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.326 [INFO][5345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.333 [WARNING][5345] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.333 [INFO][5345] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.336 [INFO][5345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:40.341869 containerd[1463]: 2025-10-31 00:40:40.339 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.342552 containerd[1463]: time="2025-10-31T00:40:40.342489231Z" level=info msg="TearDown network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\" successfully" Oct 31 00:40:40.342552 containerd[1463]: time="2025-10-31T00:40:40.342524869Z" level=info msg="StopPodSandbox for \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\" returns successfully" Oct 31 00:40:40.343249 containerd[1463]: time="2025-10-31T00:40:40.343217206Z" level=info msg="RemovePodSandbox for \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\"" Oct 31 00:40:40.343310 containerd[1463]: time="2025-10-31T00:40:40.343255299Z" level=info msg="Forcibly stopping sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\"" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.424 [WARNING][5364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0", GenerateName:"calico-apiserver-796b6cb4bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0234a26-22e7-4dab-acf3-a0c995470142", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796b6cb4bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"986dafa89063756b635a03be77b71eb3c64b78b974ba629df245b6b83427b7a3", Pod:"calico-apiserver-796b6cb4bb-vthwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba3e4f64233", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.425 [INFO][5364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.425 [INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" iface="eth0" netns="" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.425 [INFO][5364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.425 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.456 [INFO][5372] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.456 [INFO][5372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.457 [INFO][5372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.463 [WARNING][5372] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.463 [INFO][5372] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" HandleID="k8s-pod-network.d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Workload="localhost-k8s-calico--apiserver--796b6cb4bb--vthwl-eth0" Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.465 [INFO][5372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:40.474634 containerd[1463]: 2025-10-31 00:40:40.469 [INFO][5364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd" Oct 31 00:40:40.474634 containerd[1463]: time="2025-10-31T00:40:40.472205316Z" level=info msg="TearDown network for sandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\" successfully" Oct 31 00:40:40.569583 containerd[1463]: time="2025-10-31T00:40:40.569493604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:40.569583 containerd[1463]: time="2025-10-31T00:40:40.569590412Z" level=info msg="RemovePodSandbox \"d7c350554d3ca1b2bf9e87e694cef8afe77a20f57f94cfbff5e0b341de596fcd\" returns successfully" Oct 31 00:40:40.570328 containerd[1463]: time="2025-10-31T00:40:40.570149772Z" level=info msg="StopPodSandbox for \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\"" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.658 [WARNING][5390] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6gj62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8404757-a167-4c06-a272-e0eda36ae575", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8", Pod:"csi-node-driver-6gj62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2abb0773e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.659 [INFO][5390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.659 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" iface="eth0" netns="" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.659 [INFO][5390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.659 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.682 [INFO][5398] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.683 [INFO][5398] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.683 [INFO][5398] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.690 [WARNING][5398] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.691 [INFO][5398] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.693 [INFO][5398] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:40.699667 containerd[1463]: 2025-10-31 00:40:40.696 [INFO][5390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.700324 containerd[1463]: time="2025-10-31T00:40:40.699747809Z" level=info msg="TearDown network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\" successfully" Oct 31 00:40:40.700324 containerd[1463]: time="2025-10-31T00:40:40.699795070Z" level=info msg="StopPodSandbox for \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\" returns successfully" Oct 31 00:40:40.700578 containerd[1463]: time="2025-10-31T00:40:40.700553424Z" level=info msg="RemovePodSandbox for \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\"" Oct 31 00:40:40.700658 containerd[1463]: time="2025-10-31T00:40:40.700584073Z" level=info msg="Forcibly stopping sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\"" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.746 [WARNING][5415] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6gj62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8404757-a167-4c06-a272-e0eda36ae575", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c707f54f7dddc7e3c5d2e84bed7a3f9f63c22733535b7ef0fe514680a904efe8", Pod:"csi-node-driver-6gj62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2abb0773e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.747 [INFO][5415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.747 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" iface="eth0" netns="" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.747 [INFO][5415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.747 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.774 [INFO][5424] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.774 [INFO][5424] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.774 [INFO][5424] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.782 [WARNING][5424] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.782 [INFO][5424] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" HandleID="k8s-pod-network.fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Workload="localhost-k8s-csi--node--driver--6gj62-eth0" Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.784 [INFO][5424] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:40:40.791762 containerd[1463]: 2025-10-31 00:40:40.787 [INFO][5415] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029" Oct 31 00:40:40.791762 containerd[1463]: time="2025-10-31T00:40:40.791515501Z" level=info msg="TearDown network for sandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\" successfully" Oct 31 00:40:41.016216 containerd[1463]: time="2025-10-31T00:40:41.016009297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:40:41.016216 containerd[1463]: time="2025-10-31T00:40:41.016094251Z" level=info msg="RemovePodSandbox \"fe0ae7fd1412cc4577146b4821091d8e029dc2a70e4be0ad71587de3c17a5029\" returns successfully" Oct 31 00:40:42.038303 kubelet[2501]: E1031 00:40:42.038242 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:43.364050 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:42592.service - OpenSSH per-connection server daemon (10.0.0.1:42592). Oct 31 00:40:43.406575 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 42592 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:40:43.408756 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:40:43.414033 systemd-logind[1448]: New session 10 of user core. Oct 31 00:40:43.423942 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 00:40:43.543267 sshd[5444]: pam_unix(sshd:session): session closed for user core Oct 31 00:40:43.547648 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:42592.service: Deactivated successfully. Oct 31 00:40:43.549834 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 00:40:43.550574 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Oct 31 00:40:43.551555 systemd-logind[1448]: Removed session 10. 
Oct 31 00:40:45.844471 containerd[1463]: time="2025-10-31T00:40:45.843939106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:40:46.395396 containerd[1463]: time="2025-10-31T00:40:46.395317606Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:46.406506 containerd[1463]: time="2025-10-31T00:40:46.406394069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:40:46.406752 containerd[1463]: time="2025-10-31T00:40:46.406527886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:40:46.406808 kubelet[2501]: E1031 00:40:46.406759 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:46.407198 kubelet[2501]: E1031 00:40:46.406822 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:46.407198 kubelet[2501]: E1031 00:40:46.406926 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-796b6cb4bb-6pz6b_calico-apiserver(bb2918cc-8a31-4686-bd11-d009c753fde6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:46.407198 kubelet[2501]: E1031 00:40:46.406961 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6" Oct 31 00:40:47.844472 containerd[1463]: time="2025-10-31T00:40:47.844188928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:40:48.210889 containerd[1463]: time="2025-10-31T00:40:48.210673563Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:48.212798 containerd[1463]: time="2025-10-31T00:40:48.212746021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:40:48.212863 
containerd[1463]: time="2025-10-31T00:40:48.212782842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:40:48.213280 kubelet[2501]: E1031 00:40:48.213215 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:48.213280 kubelet[2501]: E1031 00:40:48.213280 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:40:48.213862 kubelet[2501]: E1031 00:40:48.213407 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-796b6cb4bb-vthwl_calico-apiserver(e0234a26-22e7-4dab-acf3-a0c995470142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:48.213862 kubelet[2501]: E1031 00:40:48.213463 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142" Oct 31 00:40:48.555467 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:42608.service - OpenSSH per-connection server daemon (10.0.0.1:42608). Oct 31 00:40:48.595819 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 42608 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:40:48.597773 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:40:48.602397 systemd-logind[1448]: New session 11 of user core. Oct 31 00:40:48.607774 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 00:40:48.739092 sshd[5461]: pam_unix(sshd:session): session closed for user core Oct 31 00:40:48.749783 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:42608.service: Deactivated successfully. Oct 31 00:40:48.752052 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 00:40:48.754487 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Oct 31 00:40:48.762998 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610). Oct 31 00:40:48.763970 systemd-logind[1448]: Removed session 11. 
Oct 31 00:40:48.840976 sshd[5476]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:40:48.842970 sshd[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:40:48.843421 containerd[1463]: time="2025-10-31T00:40:48.843385591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:40:48.887909 systemd-logind[1448]: New session 12 of user core. Oct 31 00:40:48.903949 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 00:40:49.226946 sshd[5476]: pam_unix(sshd:session): session closed for user core Oct 31 00:40:49.239008 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:42610.service: Deactivated successfully. Oct 31 00:40:49.241225 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 00:40:49.243135 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Oct 31 00:40:49.253135 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:42618.service - OpenSSH per-connection server daemon (10.0.0.1:42618). Oct 31 00:40:49.254319 systemd-logind[1448]: Removed session 12. Oct 31 00:40:49.293836 sshd[5489]: Accepted publickey for core from 10.0.0.1 port 42618 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:40:49.295848 sshd[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:40:49.300686 systemd-logind[1448]: New session 13 of user core. Oct 31 00:40:49.308780 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 00:40:49.413744 containerd[1463]: time="2025-10-31T00:40:49.413673216Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:49.458777 containerd[1463]: time="2025-10-31T00:40:49.458309754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:40:49.458777 containerd[1463]: time="2025-10-31T00:40:49.458404927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:40:49.458982 kubelet[2501]: E1031 00:40:49.458697 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:40:49.458982 kubelet[2501]: E1031 00:40:49.458753 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:40:49.458982 kubelet[2501]: E1031 00:40:49.458848 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6548595d47-2xk9x_calico-system(d4810036-8734-4e5d-affc-6c36413b2262): ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:40:49.458982 kubelet[2501]: E1031 00:40:49.458885 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262" Oct 31 00:40:49.501045 sshd[5489]: pam_unix(sshd:session): session closed for user core Oct 31 00:40:49.505511 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:42618.service: Deactivated successfully. Oct 31 00:40:49.507862 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 00:40:49.508808 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Oct 31 00:40:49.509890 systemd-logind[1448]: Removed session 13. Oct 31 00:40:49.844219 kubelet[2501]: E1031 00:40:49.843732 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:40:49.845253 containerd[1463]: time="2025-10-31T00:40:49.845197313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:40:50.201138 containerd[1463]: time="2025-10-31T00:40:50.200966334Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:40:50.202367 containerd[1463]: time="2025-10-31T00:40:50.202314911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:40:50.202521 containerd[1463]: time="2025-10-31T00:40:50.202403180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:40:50.202695 kubelet[2501]: E1031 00:40:50.202619 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:40:50.202774 kubelet[2501]: E1031 00:40:50.202692 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:40:50.202951 kubelet[2501]: E1031 00:40:50.202916 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-76599bb565-s49cl_calico-system(acc770d5-5267-4ed9-8f3a-c4a12b51e0b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:40:50.203197 containerd[1463]: time="2025-10-31T00:40:50.203160031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 31 00:40:50.530764 containerd[1463]: time="2025-10-31T00:40:50.530700158Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:40:50.531879 containerd[1463]: time="2025-10-31T00:40:50.531843751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 31 00:40:50.531976 containerd[1463]: time="2025-10-31T00:40:50.531875833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 31 00:40:50.532208 kubelet[2501]: E1031 00:40:50.532145 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:40:50.532469 kubelet[2501]: E1031 00:40:50.532220 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:40:50.532469 kubelet[2501]: E1031 00:40:50.532445 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:40:50.533034 containerd[1463]: time="2025-10-31T00:40:50.532743937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 31 00:40:50.842696 containerd[1463]: time="2025-10-31T00:40:50.842201366Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:40:50.844161 containerd[1463]: time="2025-10-31T00:40:50.844058558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 31 00:40:50.844161 containerd[1463]: time="2025-10-31T00:40:50.844121970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 31 00:40:50.844937 kubelet[2501]: E1031 00:40:50.844437 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 00:40:50.844937 kubelet[2501]: E1031 00:40:50.844524 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 00:40:50.845060 kubelet[2501]: E1031 00:40:50.844999 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-76599bb565-s49cl_calico-system(acc770d5-5267-4ed9-8f3a-c4a12b51e0b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:40:50.845098 containerd[1463]: time="2025-10-31T00:40:50.844997549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 31 00:40:50.845141 kubelet[2501]: E1031 00:40:50.845090 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76599bb565-s49cl" podUID="acc770d5-5267-4ed9-8f3a-c4a12b51e0b8"
Oct 31 00:40:51.322741 containerd[1463]: time="2025-10-31T00:40:51.322653054Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:40:51.324696 containerd[1463]: time="2025-10-31T00:40:51.324597992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 31 00:40:51.324793 containerd[1463]: time="2025-10-31T00:40:51.324661844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 31 00:40:51.324973 kubelet[2501]: E1031 00:40:51.324922 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:40:51.325027 kubelet[2501]: E1031 00:40:51.324976 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:40:51.325258 kubelet[2501]: E1031 00:40:51.325214 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:40:51.325380 kubelet[2501]: E1031 00:40:51.325285 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575"
Oct 31 00:40:51.325500 containerd[1463]: time="2025-10-31T00:40:51.325296100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 31 00:40:51.711287 containerd[1463]: time="2025-10-31T00:40:51.711118898Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:40:51.842574 kubelet[2501]: E1031 00:40:51.842482 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:40:51.879186 containerd[1463]: time="2025-10-31T00:40:51.879086424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:40:51.879334 containerd[1463]: time="2025-10-31T00:40:51.879171877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 31 00:40:51.879592 kubelet[2501]: E1031 00:40:51.879531 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:40:51.879804 kubelet[2501]: E1031 00:40:51.879594 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:40:51.879804 kubelet[2501]: E1031 00:40:51.879729 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vkzq5_calico-system(87a28487-9bca-4535-a48a-e42ddac97eba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:40:51.879804 kubelet[2501]: E1031 00:40:51.879770 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba"
Oct 31 00:40:54.515398 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:33368.service - OpenSSH per-connection server daemon (10.0.0.1:33368).
Oct 31 00:40:54.562541 sshd[5511]: Accepted publickey for core from 10.0.0.1 port 33368 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:40:54.565205 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:40:54.572058 systemd-logind[1448]: New session 14 of user core.
Oct 31 00:40:54.577897 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 31 00:40:54.766278 sshd[5511]: pam_unix(sshd:session): session closed for user core
Oct 31 00:40:54.771564 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:33368.service: Deactivated successfully.
Oct 31 00:40:54.774081 systemd[1]: session-14.scope: Deactivated successfully.
Oct 31 00:40:54.774834 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit.
Oct 31 00:40:54.776090 systemd-logind[1448]: Removed session 14.
Oct 31 00:40:56.842694 kubelet[2501]: E1031 00:40:56.842630 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:40:57.848195 kubelet[2501]: E1031 00:40:57.848150 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:40:59.784050 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:33374.service - OpenSSH per-connection server daemon (10.0.0.1:33374).
Oct 31 00:40:59.844034 kubelet[2501]: E1031 00:40:59.843936 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6"
Oct 31 00:40:59.846754 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 33374 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:40:59.849336 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:40:59.855397 systemd-logind[1448]: New session 15 of user core.
Oct 31 00:40:59.859994 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 31 00:41:00.023733 sshd[5553]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:00.028679 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:33374.service: Deactivated successfully.
Oct 31 00:41:00.030883 systemd[1]: session-15.scope: Deactivated successfully.
Oct 31 00:41:00.031589 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Oct 31 00:41:00.032586 systemd-logind[1448]: Removed session 15.
Oct 31 00:41:00.843685 kubelet[2501]: E1031 00:41:00.843503 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142"
Oct 31 00:41:01.844197 kubelet[2501]: E1031 00:41:01.844126 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76599bb565-s49cl" podUID="acc770d5-5267-4ed9-8f3a-c4a12b51e0b8"
Oct 31 00:41:01.844197 kubelet[2501]: E1031 00:41:01.844173 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262"
Oct 31 00:41:02.843584 kubelet[2501]: E1031 00:41:02.843490 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575"
Oct 31 00:41:03.844756 kubelet[2501]: E1031 00:41:03.844266 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba"
Oct 31 00:41:05.041560 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:36798.service - OpenSSH per-connection server daemon (10.0.0.1:36798).
Oct 31 00:41:05.081811 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 36798 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:05.084931 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:05.089670 systemd-logind[1448]: New session 16 of user core.
Oct 31 00:41:05.103820 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 31 00:41:05.245264 sshd[5569]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:05.249893 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:36798.service: Deactivated successfully.
Oct 31 00:41:05.252716 systemd[1]: session-16.scope: Deactivated successfully.
Oct 31 00:41:05.254938 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Oct 31 00:41:05.256351 systemd-logind[1448]: Removed session 16.
Oct 31 00:41:10.259436 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:46866.service - OpenSSH per-connection server daemon (10.0.0.1:46866).
Oct 31 00:41:10.306965 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 46866 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:10.309315 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:10.314503 systemd-logind[1448]: New session 17 of user core.
Oct 31 00:41:10.325765 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 31 00:41:10.451100 sshd[5583]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:10.460059 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:46866.service: Deactivated successfully.
Oct 31 00:41:10.462340 systemd[1]: session-17.scope: Deactivated successfully.
Oct 31 00:41:10.464178 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Oct 31 00:41:10.470658 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:46870.service - OpenSSH per-connection server daemon (10.0.0.1:46870).
Oct 31 00:41:10.471848 systemd-logind[1448]: Removed session 17.
Oct 31 00:41:10.506642 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 46870 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:10.508812 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:10.513957 systemd-logind[1448]: New session 18 of user core.
Oct 31 00:41:10.523783 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 31 00:41:11.153018 sshd[5597]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:11.162056 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:46870.service: Deactivated successfully.
Oct 31 00:41:11.164418 systemd[1]: session-18.scope: Deactivated successfully.
Oct 31 00:41:11.166322 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Oct 31 00:41:11.172928 systemd[1]: Started sshd@18-10.0.0.63:22-10.0.0.1:46880.service - OpenSSH per-connection server daemon (10.0.0.1:46880).
Oct 31 00:41:11.174073 systemd-logind[1448]: Removed session 18.
Oct 31 00:41:11.214542 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 46880 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:11.216308 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:11.220976 systemd-logind[1448]: New session 19 of user core.
Oct 31 00:41:11.225782 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 31 00:41:11.845553 containerd[1463]: time="2025-10-31T00:41:11.845457921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:41:12.355926 containerd[1463]: time="2025-10-31T00:41:12.355834752Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:12.411994 containerd[1463]: time="2025-10-31T00:41:12.411901209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:41:12.412201 containerd[1463]: time="2025-10-31T00:41:12.411955493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:41:12.412329 kubelet[2501]: E1031 00:41:12.412271 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:41:12.412796 kubelet[2501]: E1031 00:41:12.412338 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:41:12.412796 kubelet[2501]: E1031 00:41:12.412452 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-796b6cb4bb-vthwl_calico-apiserver(e0234a26-22e7-4dab-acf3-a0c995470142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:12.412796 kubelet[2501]: E1031 00:41:12.412501 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142"
Oct 31 00:41:12.870600 sshd[5610]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:12.882817 systemd[1]: sshd@18-10.0.0.63:22-10.0.0.1:46880.service: Deactivated successfully.
Oct 31 00:41:12.885366 systemd[1]: session-19.scope: Deactivated successfully.
Oct 31 00:41:12.887509 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Oct 31 00:41:12.896239 systemd[1]: Started sshd@19-10.0.0.63:22-10.0.0.1:46884.service - OpenSSH per-connection server daemon (10.0.0.1:46884).
Oct 31 00:41:12.898321 systemd-logind[1448]: Removed session 19.
Oct 31 00:41:12.943794 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 46884 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:12.946667 sshd[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:12.955187 systemd-logind[1448]: New session 20 of user core.
Oct 31 00:41:12.972015 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 31 00:41:13.458099 sshd[5641]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:13.471909 systemd[1]: sshd@19-10.0.0.63:22-10.0.0.1:46884.service: Deactivated successfully.
Oct 31 00:41:13.474410 systemd[1]: session-20.scope: Deactivated successfully.
Oct 31 00:41:13.475255 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Oct 31 00:41:13.485590 systemd[1]: Started sshd@20-10.0.0.63:22-10.0.0.1:46894.service - OpenSSH per-connection server daemon (10.0.0.1:46894).
Oct 31 00:41:13.486768 systemd-logind[1448]: Removed session 20.
Oct 31 00:41:13.521459 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 46894 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:13.523213 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:13.527740 systemd-logind[1448]: New session 21 of user core.
Oct 31 00:41:13.534738 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 31 00:41:13.693935 sshd[5655]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:13.698703 systemd[1]: sshd@20-10.0.0.63:22-10.0.0.1:46894.service: Deactivated successfully.
Oct 31 00:41:13.701790 systemd[1]: session-21.scope: Deactivated successfully.
Oct 31 00:41:13.702497 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Oct 31 00:41:13.703580 systemd-logind[1448]: Removed session 21.
Oct 31 00:41:13.845320 containerd[1463]: time="2025-10-31T00:41:13.844663334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:41:14.173508 containerd[1463]: time="2025-10-31T00:41:14.173294580Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:14.175027 containerd[1463]: time="2025-10-31T00:41:14.174971236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:41:14.175094 containerd[1463]: time="2025-10-31T00:41:14.175052330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:41:14.175389 kubelet[2501]: E1031 00:41:14.175312 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:41:14.175915 kubelet[2501]: E1031 00:41:14.175393 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:41:14.175915 kubelet[2501]: E1031 00:41:14.175667 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-796b6cb4bb-6pz6b_calico-apiserver(bb2918cc-8a31-4686-bd11-d009c753fde6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:14.175915 kubelet[2501]: E1031 00:41:14.175727 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6"
Oct 31 00:41:14.176102 containerd[1463]: time="2025-10-31T00:41:14.175894240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 31 00:41:14.488248 containerd[1463]: time="2025-10-31T00:41:14.488195566Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:14.544174 containerd[1463]: time="2025-10-31T00:41:14.544057836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 31 00:41:14.544174 containerd[1463]: time="2025-10-31T00:41:14.544126056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 31 00:41:14.544564 kubelet[2501]: E1031 00:41:14.544491 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 00:41:14.544652 kubelet[2501]: E1031 00:41:14.544570 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 00:41:14.544740 kubelet[2501]: E1031 00:41:14.544694 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-76599bb565-s49cl_calico-system(acc770d5-5267-4ed9-8f3a-c4a12b51e0b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:14.545756 containerd[1463]: time="2025-10-31T00:41:14.545720725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 31 00:41:14.950118 containerd[1463]: time="2025-10-31T00:41:14.949933585Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:14.951213 containerd[1463]: time="2025-10-31T00:41:14.951162871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 31 00:41:14.951303 containerd[1463]: time="2025-10-31T00:41:14.951199931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 31 00:41:14.951509 kubelet[2501]: E1031 00:41:14.951458 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 00:41:14.951577 kubelet[2501]: E1031 00:41:14.951525 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 00:41:14.951774 kubelet[2501]: E1031 00:41:14.951734 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-76599bb565-s49cl_calico-system(acc770d5-5267-4ed9-8f3a-c4a12b51e0b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:14.952214 kubelet[2501]: E1031 00:41:14.951809 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76599bb565-s49cl" podUID="acc770d5-5267-4ed9-8f3a-c4a12b51e0b8"
Oct 31 00:41:14.952312 containerd[1463]: time="2025-10-31T00:41:14.952011642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 31 00:41:15.270473 containerd[1463]: time="2025-10-31T00:41:15.270398136Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:15.337748 containerd[1463]: time="2025-10-31T00:41:15.337690437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 31 00:41:15.337877 containerd[1463]: time="2025-10-31T00:41:15.337751273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:41:15.338031 kubelet[2501]: E1031 00:41:15.337973 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:41:15.338465 kubelet[2501]: E1031 00:41:15.338029 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:41:15.338465 kubelet[2501]: E1031 00:41:15.338198 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vkzq5_calico-system(87a28487-9bca-4535-a48a-e42ddac97eba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:15.338465 kubelet[2501]: E1031 00:41:15.338262 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba"
Oct 31 00:41:15.338598 containerd[1463]: time="2025-10-31T00:41:15.338498291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 31 00:41:15.669745 containerd[1463]: time="2025-10-31T00:41:15.669571827Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:15.671054 containerd[1463]: time="2025-10-31T00:41:15.670976253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 31 00:41:15.671054 containerd[1463]: time="2025-10-31T00:41:15.671015578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 31 00:41:15.671257 kubelet[2501]: E1031 00:41:15.671210 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:41:15.671337 kubelet[2501]: E1031 00:41:15.671266 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:41:15.671392 kubelet[2501]: E1031 00:41:15.671357 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6548595d47-2xk9x_calico-system(d4810036-8734-4e5d-affc-6c36413b2262): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:15.671456 kubelet[2501]: E1031 00:41:15.671393 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262"
Oct 31 00:41:17.843948 containerd[1463]: time="2025-10-31T00:41:17.843834054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 31 00:41:18.364777 containerd[1463]: time="2025-10-31T00:41:18.364722481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:18.397715 containerd[1463]: time="2025-10-31T00:41:18.397628911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 31 00:41:18.397860 containerd[1463]: time="2025-10-31T00:41:18.397663176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 31 00:41:18.397981 kubelet[2501]: E1031 00:41:18.397928 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:41:18.398410 kubelet[2501]: E1031 00:41:18.397992 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:41:18.398410 kubelet[2501]: E1031 00:41:18.398098 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:18.398969 containerd[1463]: time="2025-10-31T00:41:18.398943436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 31 00:41:18.710846 systemd[1]: Started sshd@21-10.0.0.63:22-10.0.0.1:46898.service - OpenSSH per-connection server daemon (10.0.0.1:46898).
Oct 31 00:41:18.751505 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 46898 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:18.753569 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:18.759442 systemd-logind[1448]: New session 22 of user core.
Oct 31 00:41:18.768861 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 31 00:41:18.820754 containerd[1463]: time="2025-10-31T00:41:18.820679209Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:41:18.822767 containerd[1463]: time="2025-10-31T00:41:18.822530082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 31 00:41:18.822767 containerd[1463]: time="2025-10-31T00:41:18.822660360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 31 00:41:18.822986 kubelet[2501]: E1031 00:41:18.822847 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:41:18.822986 kubelet[2501]: E1031 00:41:18.822915 2501 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:41:18.823093 kubelet[2501]: E1031 00:41:18.823024 2501 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6gj62_calico-system(b8404757-a167-4c06-a272-e0eda36ae575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:41:18.823182 kubelet[2501]: E1031 00:41:18.823088 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575"
Oct 31 00:41:18.933842 sshd[5674]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:18.939246 systemd[1]: sshd@21-10.0.0.63:22-10.0.0.1:46898.service: Deactivated successfully.
Oct 31 00:41:18.941502 systemd[1]: session-22.scope: Deactivated successfully.
Oct 31 00:41:18.942309 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Oct 31 00:41:18.943412 systemd-logind[1448]: Removed session 22.
Oct 31 00:41:23.946778 systemd[1]: Started sshd@22-10.0.0.63:22-10.0.0.1:34914.service - OpenSSH per-connection server daemon (10.0.0.1:34914).
Oct 31 00:41:23.986885 sshd[5691]: Accepted publickey for core from 10.0.0.1 port 34914 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:23.988905 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:23.993982 systemd-logind[1448]: New session 23 of user core.
Oct 31 00:41:24.001766 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 31 00:41:24.116356 sshd[5691]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:24.120868 systemd[1]: sshd@22-10.0.0.63:22-10.0.0.1:34914.service: Deactivated successfully.
Oct 31 00:41:24.123143 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 00:41:24.123897 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Oct 31 00:41:24.124955 systemd-logind[1448]: Removed session 23.
Oct 31 00:41:25.844730 kubelet[2501]: E1031 00:41:25.844635 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-vthwl" podUID="e0234a26-22e7-4dab-acf3-a0c995470142"
Oct 31 00:41:25.845235 kubelet[2501]: E1031 00:41:25.845005 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76599bb565-s49cl" podUID="acc770d5-5267-4ed9-8f3a-c4a12b51e0b8"
Oct 31 00:41:28.107736 kubelet[2501]: E1031 00:41:28.107701 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:41:28.843478 kubelet[2501]: E1031 00:41:28.843009 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6548595d47-2xk9x" podUID="d4810036-8734-4e5d-affc-6c36413b2262"
Oct 31 00:41:29.132960 systemd[1]: Started sshd@23-10.0.0.63:22-10.0.0.1:34918.service - OpenSSH per-connection server daemon (10.0.0.1:34918).
Oct 31 00:41:29.176599 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 34918 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:29.178603 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:29.183295 systemd-logind[1448]: New session 24 of user core.
Oct 31 00:41:29.191848 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 31 00:41:29.318703 sshd[5728]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:29.322745 systemd[1]: sshd@23-10.0.0.63:22-10.0.0.1:34918.service: Deactivated successfully.
Oct 31 00:41:29.324843 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 00:41:29.325403 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Oct 31 00:41:29.326314 systemd-logind[1448]: Removed session 24.
Oct 31 00:41:29.844163 kubelet[2501]: E1031 00:41:29.843410 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-796b6cb4bb-6pz6b" podUID="bb2918cc-8a31-4686-bd11-d009c753fde6"
Oct 31 00:41:30.843600 kubelet[2501]: E1031 00:41:30.843534 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vkzq5" podUID="87a28487-9bca-4535-a48a-e42ddac97eba"
Oct 31 00:41:33.844326 kubelet[2501]: E1031 00:41:33.844229 2501 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6gj62" podUID="b8404757-a167-4c06-a272-e0eda36ae575"
Oct 31 00:41:34.338887 systemd[1]: Started sshd@24-10.0.0.63:22-10.0.0.1:34480.service - OpenSSH per-connection server daemon (10.0.0.1:34480).
Oct 31 00:41:34.385052 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 34480 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g
Oct 31 00:41:34.385746 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:41:34.396303 systemd-logind[1448]: New session 25 of user core.
Oct 31 00:41:34.402824 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 31 00:41:34.533736 sshd[5745]: pam_unix(sshd:session): session closed for user core
Oct 31 00:41:34.539216 systemd[1]: sshd@24-10.0.0.63:22-10.0.0.1:34480.service: Deactivated successfully.
Oct 31 00:41:34.542068 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 00:41:34.543122 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Oct 31 00:41:34.544997 systemd-logind[1448]: Removed session 25.