Nov 4 04:57:06.427462 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 03:00:51 -00 2025
Nov 4 04:57:06.427509 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:57:06.427526 kernel: BIOS-provided physical RAM map:
Nov 4 04:57:06.427533 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 04:57:06.427540 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 04:57:06.427547 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 04:57:06.427555 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 4 04:57:06.427562 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 4 04:57:06.427572 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 04:57:06.427579 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 04:57:06.427593 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 04:57:06.427600 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 04:57:06.427607 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 04:57:06.427614 kernel: NX (Execute Disable) protection: active
Nov 4 04:57:06.427623 kernel: APIC: Static calls initialized
Nov 4 04:57:06.427637 kernel: SMBIOS 2.8 present.
Nov 4 04:57:06.427648 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 4 04:57:06.427655 kernel: DMI: Memory slots populated: 1/1
Nov 4 04:57:06.427663 kernel: Hypervisor detected: KVM
Nov 4 04:57:06.427671 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 04:57:06.427678 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 04:57:06.427686 kernel: kvm-clock: using sched offset of 4393403781 cycles
Nov 4 04:57:06.427695 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 04:57:06.427704 kernel: tsc: Detected 2794.750 MHz processor
Nov 4 04:57:06.427720 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 04:57:06.427729 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 04:57:06.427737 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 04:57:06.427745 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 04:57:06.427754 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 04:57:06.427762 kernel: Using GB pages for direct mapping
Nov 4 04:57:06.427770 kernel: ACPI: Early table checksum verification disabled
Nov 4 04:57:06.427785 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 4 04:57:06.427793 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:06.427802 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:06.427810 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:06.427818 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 4 04:57:06.427826 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:06.427834 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:06.427849 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:06.427858 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:06.427874 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 4 04:57:06.427882 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 4 04:57:06.427890 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 4 04:57:06.427910 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 4 04:57:06.427919 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 4 04:57:06.427938 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 4 04:57:06.427947 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 4 04:57:06.427976 kernel: No NUMA configuration found
Nov 4 04:57:06.427988 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 4 04:57:06.428000 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 4 04:57:06.428024 kernel: Zone ranges:
Nov 4 04:57:06.428036 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 04:57:06.428047 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 4 04:57:06.428058 kernel: Normal empty
Nov 4 04:57:06.428068 kernel: Device empty
Nov 4 04:57:06.428078 kernel: Movable zone start for each node
Nov 4 04:57:06.428088 kernel: Early memory node ranges
Nov 4 04:57:06.428108 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 04:57:06.428118 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 4 04:57:06.428128 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 4 04:57:06.428138 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 04:57:06.428149 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 04:57:06.428160 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 4 04:57:06.428174 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 04:57:06.428185 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 04:57:06.428208 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 04:57:06.428217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 04:57:06.428228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 04:57:06.428237 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 04:57:06.428245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 04:57:06.428254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 04:57:06.428262 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 04:57:06.428277 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 04:57:06.428286 kernel: TSC deadline timer available
Nov 4 04:57:06.428294 kernel: CPU topo: Max. logical packages: 1
Nov 4 04:57:06.428302 kernel: CPU topo: Max. logical dies: 1
Nov 4 04:57:06.428310 kernel: CPU topo: Max. dies per package: 1
Nov 4 04:57:06.428318 kernel: CPU topo: Max. threads per core: 1
Nov 4 04:57:06.428326 kernel: CPU topo: Num. cores per package: 4
Nov 4 04:57:06.428341 kernel: CPU topo: Num. threads per package: 4
Nov 4 04:57:06.428349 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 04:57:06.428357 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 04:57:06.428365 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 04:57:06.428374 kernel: kvm-guest: setup PV sched yield
Nov 4 04:57:06.428382 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 04:57:06.428390 kernel: Booting paravirtualized kernel on KVM
Nov 4 04:57:06.428399 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 04:57:06.428414 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 04:57:06.428423 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 04:57:06.428431 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 04:57:06.428439 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 04:57:06.428447 kernel: kvm-guest: PV spinlocks enabled
Nov 4 04:57:06.428455 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 04:57:06.428465 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:57:06.428481 kernel: random: crng init done
Nov 4 04:57:06.428489 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 04:57:06.428498 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 04:57:06.428506 kernel: Fallback order for Node 0: 0
Nov 4 04:57:06.428514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 4 04:57:06.428522 kernel: Policy zone: DMA32
Nov 4 04:57:06.428531 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 04:57:06.428546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 04:57:06.428554 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 04:57:06.428562 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 04:57:06.428571 kernel: Dynamic Preempt: voluntary
Nov 4 04:57:06.428579 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 04:57:06.428588 kernel: rcu: RCU event tracing is enabled.
Nov 4 04:57:06.428597 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 04:57:06.428612 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 04:57:06.428622 kernel: Rude variant of Tasks RCU enabled.
Nov 4 04:57:06.428630 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 04:57:06.428638 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 04:57:06.428646 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 04:57:06.428655 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 04:57:06.428663 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 04:57:06.428672 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 04:57:06.428687 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 04:57:06.428695 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 04:57:06.428723 kernel: Console: colour VGA+ 80x25
Nov 4 04:57:06.428738 kernel: printk: legacy console [ttyS0] enabled
Nov 4 04:57:06.428746 kernel: ACPI: Core revision 20240827
Nov 4 04:57:06.428755 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 04:57:06.428763 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 04:57:06.428772 kernel: x2apic enabled
Nov 4 04:57:06.428780 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 04:57:06.428798 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 04:57:06.428807 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 04:57:06.428815 kernel: kvm-guest: setup PV IPIs
Nov 4 04:57:06.428824 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 04:57:06.428839 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 04:57:06.428848 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 4 04:57:06.428857 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 04:57:06.428865 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 04:57:06.428874 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 04:57:06.428883 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 04:57:06.428891 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 04:57:06.428907 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 04:57:06.428915 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 04:57:06.428931 kernel: active return thunk: retbleed_return_thunk
Nov 4 04:57:06.428940 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 04:57:06.428993 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 04:57:06.429003 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 04:57:06.429011 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 04:57:06.429077 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 04:57:06.429085 kernel: active return thunk: srso_return_thunk
Nov 4 04:57:06.429094 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 04:57:06.429103 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 04:57:06.429111 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 04:57:06.429120 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 04:57:06.429128 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 04:57:06.429180 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 04:57:06.429192 kernel: Freeing SMP alternatives memory: 32K
Nov 4 04:57:06.429204 kernel: pid_max: default: 32768 minimum: 301
Nov 4 04:57:06.429216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 04:57:06.429228 kernel: landlock: Up and running.
Nov 4 04:57:06.429240 kernel: SELinux: Initializing.
Nov 4 04:57:06.429255 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:57:06.429277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:57:06.429288 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 04:57:06.429312 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 04:57:06.429330 kernel: ... version: 0
Nov 4 04:57:06.429342 kernel: ... bit width: 48
Nov 4 04:57:06.429353 kernel: ... generic registers: 6
Nov 4 04:57:06.429379 kernel: ... value mask: 0000ffffffffffff
Nov 4 04:57:06.429401 kernel: ... max period: 00007fffffffffff
Nov 4 04:57:06.429412 kernel: ... fixed-purpose events: 0
Nov 4 04:57:06.429423 kernel: ... event mask: 000000000000003f
Nov 4 04:57:06.429434 kernel: signal: max sigframe size: 1776
Nov 4 04:57:06.429446 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 04:57:06.429458 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 04:57:06.429467 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 04:57:06.429485 kernel: smp: Bringing up secondary CPUs ...
Nov 4 04:57:06.429494 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 04:57:06.429502 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 04:57:06.429511 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 04:57:06.429519 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 4 04:57:06.429529 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 118472K reserved, 0K cma-reserved)
Nov 4 04:57:06.429537 kernel: devtmpfs: initialized
Nov 4 04:57:06.429553 kernel: x86/mm: Memory block size: 128MB
Nov 4 04:57:06.429562 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 04:57:06.429570 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 04:57:06.429579 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 04:57:06.429588 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 04:57:06.429596 kernel: audit: initializing netlink subsys (disabled)
Nov 4 04:57:06.429605 kernel: audit: type=2000 audit(1762232223.286:1): state=initialized audit_enabled=0 res=1
Nov 4 04:57:06.429620 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 04:57:06.429629 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 04:57:06.429637 kernel: cpuidle: using governor menu
Nov 4 04:57:06.429645 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 04:57:06.429654 kernel: dca service started, version 1.12.1
Nov 4 04:57:06.429662 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 04:57:06.429671 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 4 04:57:06.429680 kernel: PCI: Using configuration type 1 for base access
Nov 4 04:57:06.429695 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 04:57:06.429704 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 04:57:06.429712 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 04:57:06.429721 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 04:57:06.429730 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 04:57:06.429738 kernel: ACPI: Added _OSI(Module Device)
Nov 4 04:57:06.429747 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 04:57:06.429762 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 04:57:06.429771 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 04:57:06.429779 kernel: ACPI: Interpreter enabled
Nov 4 04:57:06.429788 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 04:57:06.429796 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 04:57:06.429805 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 04:57:06.429814 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 04:57:06.429829 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 04:57:06.429837 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 04:57:06.430211 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 04:57:06.430400 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 04:57:06.430667 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 04:57:06.430686 kernel: PCI host bridge to bus 0000:00
Nov 4 04:57:06.430939 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 04:57:06.431127 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 04:57:06.431311 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 04:57:06.431506 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 4 04:57:06.431671 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 04:57:06.431848 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 4 04:57:06.432041 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 04:57:06.432244 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 04:57:06.432487 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 04:57:06.432980 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 4 04:57:06.433491 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 4 04:57:06.433719 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 4 04:57:06.433966 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 04:57:06.434169 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 04:57:06.434351 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 4 04:57:06.434527 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 4 04:57:06.434720 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 4 04:57:06.434943 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 04:57:06.435182 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 04:57:06.435363 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 4 04:57:06.435539 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 4 04:57:06.435744 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 04:57:06.435976 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 4 04:57:06.436196 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 4 04:57:06.436413 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 4 04:57:06.436599 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 4 04:57:06.436789 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 04:57:06.436995 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 04:57:06.437199 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 04:57:06.437373 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 4 04:57:06.437582 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 4 04:57:06.437791 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 04:57:06.437997 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 04:57:06.438026 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 04:57:06.438037 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 04:57:06.438047 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 04:57:06.438055 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 04:57:06.438064 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 04:57:06.438073 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 04:57:06.438081 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 04:57:06.438097 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 04:57:06.438106 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 04:57:06.438114 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 04:57:06.438123 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 04:57:06.438132 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 04:57:06.438141 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 04:57:06.438149 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 04:57:06.438164 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 04:57:06.438173 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 04:57:06.438268 kernel: iommu: Default domain type: Translated
Nov 4 04:57:06.438281 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 04:57:06.438293 kernel: PCI: Using ACPI for IRQ routing
Nov 4 04:57:06.438305 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 04:57:06.438316 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 04:57:06.438341 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 4 04:57:06.438549 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 04:57:06.438775 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 04:57:06.439013 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 04:57:06.439027 kernel: vgaarb: loaded
Nov 4 04:57:06.439036 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 04:57:06.439045 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 04:57:06.439068 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 04:57:06.439077 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 04:57:06.439086 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 04:57:06.439094 kernel: pnp: PnP ACPI init
Nov 4 04:57:06.439288 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 04:57:06.439301 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 04:57:06.439310 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 04:57:06.439330 kernel: NET: Registered PF_INET protocol family
Nov 4 04:57:06.439338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 04:57:06.439354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 04:57:06.439372 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 04:57:06.439381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 04:57:06.439390 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 04:57:06.439398 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 04:57:06.439420 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:57:06.439428 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:57:06.439437 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 04:57:06.439446 kernel: NET: Registered PF_XDP protocol family
Nov 4 04:57:06.439653 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 04:57:06.439848 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 04:57:06.440082 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 04:57:06.440275 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 4 04:57:06.440485 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 04:57:06.440681 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 4 04:57:06.440696 kernel: PCI: CLS 0 bytes, default 64
Nov 4 04:57:06.440705 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 04:57:06.440714 kernel: Initialise system trusted keyrings
Nov 4 04:57:06.440738 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 04:57:06.440747 kernel: Key type asymmetric registered
Nov 4 04:57:06.440756 kernel: Asymmetric key parser 'x509' registered
Nov 4 04:57:06.440765 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 04:57:06.440774 kernel: io scheduler mq-deadline registered
Nov 4 04:57:06.440782 kernel: io scheduler kyber registered
Nov 4 04:57:06.440791 kernel: io scheduler bfq registered
Nov 4 04:57:06.440808 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 04:57:06.440817 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 04:57:06.440826 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 04:57:06.440835 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 04:57:06.440844 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 04:57:06.440852 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 04:57:06.440861 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 04:57:06.440877 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 04:57:06.440885 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 04:57:06.441118 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 04:57:06.441321 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 04:57:06.441335 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 04:57:06.441500 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T04:57:04 UTC (1762232224)
Nov 4 04:57:06.441715 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 4 04:57:06.441751 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 04:57:06.441764 kernel: NET: Registered PF_INET6 protocol family
Nov 4 04:57:06.441775 kernel: Segment Routing with IPv6
Nov 4 04:57:06.441787 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 04:57:06.441799 kernel: NET: Registered PF_PACKET protocol family
Nov 4 04:57:06.441811 kernel: Key type dns_resolver registered
Nov 4 04:57:06.441823 kernel: IPI shorthand broadcast: enabled
Nov 4 04:57:06.441845 kernel: sched_clock: Marking stable (2008012629, 218457267)->(2402757530, -176287634)
Nov 4 04:57:06.441857 kernel: registered taskstats version 1
Nov 4 04:57:06.441869 kernel: Loading compiled-in X.509 certificates
Nov 4 04:57:06.441881 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dafbe857b8ef9eaad4381fdddb57853ce023547e'
Nov 4 04:57:06.441893 kernel: Demotion targets for Node 0: null
Nov 4 04:57:06.441905 kernel: Key type .fscrypt registered
Nov 4 04:57:06.441917 kernel: Key type fscrypt-provisioning registered
Nov 4 04:57:06.441947 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 04:57:06.441970 kernel: ima: Allocated hash algorithm: sha1
Nov 4 04:57:06.441979 kernel: ima: No architecture policies found
Nov 4 04:57:06.441987 kernel: clk: Disabling unused clocks
Nov 4 04:57:06.441996 kernel: Freeing unused kernel image (initmem) memory: 15360K
Nov 4 04:57:06.442005 kernel: Write protecting the kernel read-only data: 45056k
Nov 4 04:57:06.442014 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 4 04:57:06.442030 kernel: Run /init as init process
Nov 4 04:57:06.442039 kernel: with arguments:
Nov 4 04:57:06.442048 kernel: /init
Nov 4 04:57:06.442056 kernel: with environment:
Nov 4 04:57:06.442065 kernel: HOME=/
Nov 4 04:57:06.442073 kernel: TERM=linux
Nov 4 04:57:06.442082 kernel: SCSI subsystem initialized
Nov 4 04:57:06.442090 kernel: libata version 3.00 loaded.
Nov 4 04:57:06.442296 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 04:57:06.442377 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 04:57:06.442594 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 04:57:06.442777 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 04:57:06.442996 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 04:57:06.443344 kernel: scsi host0: ahci
Nov 4 04:57:06.443546 kernel: scsi host1: ahci
Nov 4 04:57:06.443737 kernel: scsi host2: ahci
Nov 4 04:57:06.443979 kernel: scsi host3: ahci
Nov 4 04:57:06.444220 kernel: scsi host4: ahci
Nov 4 04:57:06.444432 kernel: scsi host5: ahci
Nov 4 04:57:06.444446 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 4 04:57:06.444456 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 4 04:57:06.444465 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 4 04:57:06.444475 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 4 04:57:06.444484 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 4 04:57:06.444493 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 4 04:57:06.444517 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 04:57:06.444527 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 04:57:06.444536 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 4 04:57:06.444545 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 04:57:06.444560 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 04:57:06.444570 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 04:57:06.444579 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 4 04:57:06.444594 kernel: ata3.00: applying bridge limits
Nov 4 04:57:06.444603 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 04:57:06.444612 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 04:57:06.444621 kernel: ata3.00: configured for UDMA/100
Nov 4 04:57:06.444856 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 4 04:57:06.445081 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 4 04:57:06.445302 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 4 04:57:06.445320 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 04:57:06.445332 kernel: GPT:16515071 != 27000831
Nov 4 04:57:06.445344 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 04:57:06.445356 kernel: GPT:16515071 != 27000831
Nov 4 04:57:06.445367 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 04:57:06.445378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 04:57:06.445610 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 4 04:57:06.445625 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 4 04:57:06.445819 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 4 04:57:06.445831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 04:57:06.445841 kernel: device-mapper: uevent: version 1.0.3
Nov 4 04:57:06.445850 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 04:57:06.445859 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 04:57:06.445881 kernel: raid6: avx2x4 gen() 27594 MB/s
Nov 4 04:57:06.445890 kernel: raid6: avx2x2 gen() 28350 MB/s
Nov 4 04:57:06.445899 kernel: raid6: avx2x1 gen() 24915 MB/s
Nov 4 04:57:06.445908 kernel: raid6: using algorithm avx2x2 gen() 28350 MB/s
Nov 4 04:57:06.445931 kernel: raid6: .... xor() 18826 MB/s, rmw enabled
Nov 4 04:57:06.445940 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 04:57:06.445963 kernel: xor: automatically using best checksumming function avx
Nov 4 04:57:06.445973 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 04:57:06.445982 kernel: BTRFS: device fsid 6f0a5369-79b6-4a87-b9a6-85ec05be306c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (180)
Nov 4 04:57:06.445991 kernel: BTRFS info (device dm-0): first mount of filesystem 6f0a5369-79b6-4a87-b9a6-85ec05be306c
Nov 4 04:57:06.446001 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 04:57:06.446017 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 04:57:06.446026 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 04:57:06.446042 kernel: loop: module loaded
Nov 4 04:57:06.446051 kernel: loop0: detected capacity change from 0 to 100136
Nov 4 04:57:06.446060 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 04:57:06.446071 systemd[1]: Successfully made /usr/ read-only.
Nov 4 04:57:06.446084 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 04:57:06.446100 systemd[1]: Detected virtualization kvm.
Nov 4 04:57:06.446110 systemd[1]: Detected architecture x86-64.
Nov 4 04:57:06.446119 systemd[1]: Running in initrd.
Nov 4 04:57:06.446128 systemd[1]: No hostname configured, using default hostname.
Nov 4 04:57:06.446144 systemd[1]: Hostname set to .
Nov 4 04:57:06.446160 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 04:57:06.446188 systemd[1]: Queued start job for default target initrd.target.
Nov 4 04:57:06.446201 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 04:57:06.446211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:57:06.446220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:57:06.446231 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 04:57:06.446240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 04:57:06.446259 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 04:57:06.446269 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 04:57:06.446279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:57:06.446289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:57:06.446298 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 04:57:06.446308 systemd[1]: Reached target paths.target - Path Units.
Nov 4 04:57:06.446324 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 04:57:06.446334 systemd[1]: Reached target swap.target - Swaps.
Nov 4 04:57:06.446343 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 04:57:06.446352 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 04:57:06.446362 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 04:57:06.446371 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 04:57:06.446381 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 04:57:06.446397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:57:06.446413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:57:06.446423 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:57:06.446432 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 04:57:06.446442 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 04:57:06.446460 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 04:57:06.446476 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 04:57:06.446506 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 04:57:06.446519 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 04:57:06.446533 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 04:57:06.446545 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 04:57:06.446558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 04:57:06.446572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:57:06.446596 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 04:57:06.446609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:57:06.446623 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 04:57:06.446637 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 04:57:06.446695 systemd-journald[316]: Collecting audit messages is disabled.
Nov 4 04:57:06.446739 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 04:57:06.446753 systemd-journald[316]: Journal started
Nov 4 04:57:06.446789 systemd-journald[316]: Runtime Journal (/run/log/journal/8b13912fe01e45c7add4bc2ad99fa63a) is 6M, max 48.2M, 42.2M free.
Nov 4 04:57:06.456998 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 04:57:06.465268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 04:57:06.467366 kernel: Bridge firewalling registered
Nov 4 04:57:06.466486 systemd-modules-load[319]: Inserted module 'br_netfilter'
Nov 4 04:57:06.469016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:57:06.471368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 04:57:06.506385 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 04:57:06.511331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 04:57:06.520880 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 04:57:06.588683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 04:57:06.599402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:57:06.604555 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:57:06.611124 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 04:57:06.627268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 04:57:06.631720 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:57:06.654761 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 04:57:06.663725 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 04:57:06.705633 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:57:06.720724 systemd-resolved[348]: Positive Trust Anchors:
Nov 4 04:57:06.720751 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 04:57:06.720756 systemd-resolved[348]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 04:57:06.720799 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 04:57:06.763810 systemd-resolved[348]: Defaulting to hostname 'linux'.
Nov 4 04:57:06.766250 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 04:57:06.770062 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:57:06.853003 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 04:57:06.869000 kernel: iscsi: registered transport (tcp)
Nov 4 04:57:06.901733 kernel: iscsi: registered transport (qla4xxx)
Nov 4 04:57:06.901825 kernel: QLogic iSCSI HBA Driver
Nov 4 04:57:06.935668 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 04:57:06.991881 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:57:06.993324 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 04:57:07.068372 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 04:57:07.071108 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 04:57:07.073392 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 04:57:07.123667 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 04:57:07.139797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:57:07.187697 systemd-udevd[593]: Using default interface naming scheme 'v257'.
Nov 4 04:57:07.209378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 04:57:07.214293 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 04:57:07.243285 dracut-pre-trigger[663]: rd.md=0: removing MD RAID activation
Nov 4 04:57:07.252766 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 04:57:07.259605 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 04:57:07.282193 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 04:57:07.286393 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 04:57:07.327017 systemd-networkd[714]: lo: Link UP
Nov 4 04:57:07.327026 systemd-networkd[714]: lo: Gained carrier
Nov 4 04:57:07.327683 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 04:57:07.329428 systemd[1]: Reached target network.target - Network.
Nov 4 04:57:07.401865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:57:07.409505 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 04:57:07.464706 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 04:57:07.474888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 04:57:07.503735 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 04:57:07.518597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 04:57:07.533827 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 04:57:07.539991 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 04:57:07.547529 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 04:57:07.554614 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 04:57:07.554653 kernel: AES CTR mode by8 optimization enabled
Nov 4 04:57:07.547537 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 04:57:07.548031 systemd-networkd[714]: eth0: Link UP
Nov 4 04:57:07.549344 systemd-networkd[714]: eth0: Gained carrier
Nov 4 04:57:07.549354 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 04:57:07.571035 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 04:57:07.574481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 04:57:07.574613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:57:07.627985 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:57:07.638972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:57:07.748886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:57:07.814507 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 04:57:07.817405 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 04:57:07.820695 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:57:07.823011 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 04:57:07.826081 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 04:57:07.924077 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 04:57:08.026140 disk-uuid[774]: Primary Header is updated.
Nov 4 04:57:08.026140 disk-uuid[774]: Secondary Entries is updated.
Nov 4 04:57:08.026140 disk-uuid[774]: Secondary Header is updated.
Nov 4 04:57:09.151914 disk-uuid[852]: Warning: The kernel is still using the old partition table.
Nov 4 04:57:09.151914 disk-uuid[852]: The new table will be used at the next reboot or after you
Nov 4 04:57:09.151914 disk-uuid[852]: run partprobe(8) or kpartx(8)
Nov 4 04:57:09.151914 disk-uuid[852]: The operation has completed successfully.
Nov 4 04:57:09.327470 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 04:57:09.327626 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 04:57:09.332539 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 04:57:09.381422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862)
Nov 4 04:57:09.381482 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 04:57:09.381526 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 04:57:09.387751 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 04:57:09.387860 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 04:57:09.397996 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 04:57:09.399661 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 04:57:09.404530 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 04:57:09.508618 systemd-networkd[714]: eth0: Gained IPv6LL
Nov 4 04:57:09.664717 ignition[881]: Ignition 2.22.0
Nov 4 04:57:09.664733 ignition[881]: Stage: fetch-offline
Nov 4 04:57:09.664801 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Nov 4 04:57:09.664815 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 04:57:09.664944 ignition[881]: parsed url from cmdline: ""
Nov 4 04:57:09.664966 ignition[881]: no config URL provided
Nov 4 04:57:09.664973 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 04:57:09.664987 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Nov 4 04:57:09.665035 ignition[881]: op(1): [started] loading QEMU firmware config module
Nov 4 04:57:09.665040 ignition[881]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 4 04:57:09.677911 ignition[881]: op(1): [finished] loading QEMU firmware config module
Nov 4 04:57:09.766296 ignition[881]: parsing config with SHA512: 964b4eb8d21b26bab90bb0fb5e653de194a61bfc24a5909db449bf5d36086e39b90433d0f50ffa652af3ce350e547d6704aebe6b64d4b0e4668c0fd4fecf2bad
Nov 4 04:57:09.770874 unknown[881]: fetched base config from "system"
Nov 4 04:57:09.770904 unknown[881]: fetched user config from "qemu"
Nov 4 04:57:09.772323 ignition[881]: fetch-offline: fetch-offline passed
Nov 4 04:57:09.772414 ignition[881]: Ignition finished successfully
Nov 4 04:57:09.775205 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 04:57:09.779873 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 4 04:57:09.781606 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 04:57:09.853091 ignition[891]: Ignition 2.22.0
Nov 4 04:57:09.853107 ignition[891]: Stage: kargs
Nov 4 04:57:09.853270 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Nov 4 04:57:09.853281 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 04:57:09.854072 ignition[891]: kargs: kargs passed
Nov 4 04:57:09.854127 ignition[891]: Ignition finished successfully
Nov 4 04:57:09.860124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 04:57:09.864657 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 04:57:09.925932 ignition[899]: Ignition 2.22.0
Nov 4 04:57:09.925945 ignition[899]: Stage: disks
Nov 4 04:57:09.926098 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Nov 4 04:57:09.926108 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 04:57:09.930376 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 04:57:09.926804 ignition[899]: disks: disks passed
Nov 4 04:57:09.932700 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 04:57:09.926856 ignition[899]: Ignition finished successfully
Nov 4 04:57:09.935980 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 04:57:09.939652 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 04:57:09.941444 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 04:57:09.944160 systemd[1]: Reached target basic.target - Basic System.
Nov 4 04:57:09.948726 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 04:57:09.997232 systemd-fsck[909]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 04:57:10.006151 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 04:57:10.009120 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 04:57:10.132014 kernel: EXT4-fs (vda9): mounted filesystem c35327fb-3cdd-496e-85aa-9e1b4133507f r/w with ordered data mode. Quota mode: none.
Nov 4 04:57:10.132856 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 04:57:10.134337 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 04:57:10.139483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 04:57:10.141771 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 04:57:10.143775 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 04:57:10.143815 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 04:57:10.143845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 04:57:10.165966 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 04:57:10.169542 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 04:57:10.179141 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (917)
Nov 4 04:57:10.179170 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 04:57:10.179182 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 04:57:10.183506 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 04:57:10.183535 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 04:57:10.185119 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 04:57:10.255365 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 04:57:10.262651 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Nov 4 04:57:10.267850 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 04:57:10.272842 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 04:57:10.397411 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 04:57:10.420393 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 04:57:10.423240 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 04:57:10.443104 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 04:57:10.445862 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 04:57:10.469140 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 04:57:10.498257 ignition[1030]: INFO : Ignition 2.22.0
Nov 4 04:57:10.498257 ignition[1030]: INFO : Stage: mount
Nov 4 04:57:10.501280 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 04:57:10.501280 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 04:57:10.501280 ignition[1030]: INFO : mount: mount passed
Nov 4 04:57:10.501280 ignition[1030]: INFO : Ignition finished successfully
Nov 4 04:57:10.504500 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 04:57:10.507051 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 04:57:10.537897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 04:57:10.571985 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1044)
Nov 4 04:57:10.572031 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 04:57:10.573466 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 04:57:10.577412 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 04:57:10.577484 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 04:57:10.579355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 04:57:10.683195 ignition[1061]: INFO : Ignition 2.22.0
Nov 4 04:57:10.683195 ignition[1061]: INFO : Stage: files
Nov 4 04:57:10.694780 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 04:57:10.694780 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 04:57:10.698892 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 04:57:10.701368 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 04:57:10.701368 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 04:57:10.711067 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 04:57:10.713511 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 04:57:10.716252 unknown[1061]: wrote ssh authorized keys file for user: core
Nov 4 04:57:10.718121 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 04:57:10.720904 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 04:57:10.724131 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 04:57:10.782615 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 04:57:10.914314 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 04:57:10.918064 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 04:57:11.010823 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 04:57:11.014548 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 04:57:11.014548 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 04:57:11.159108 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 04:57:11.159108 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 04:57:11.198206 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 4 04:57:11.716875 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 04:57:13.133571 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 04:57:13.133571 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 4 04:57:13.141355 ignition[1061]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 4 04:57:13.220680 ignition[1061]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 04:57:13.226159 ignition[1061]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 04:57:13.229105 ignition[1061]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 4 04:57:13.229105 ignition[1061]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 04:57:13.229105 ignition[1061]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 04:57:13.229105 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 04:57:13.229105 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 04:57:13.229105 ignition[1061]: INFO : files: files passed
Nov 4 04:57:13.229105 ignition[1061]: INFO : Ignition finished successfully
Nov 4 04:57:13.235516 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 04:57:13.237928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 04:57:13.244396 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 04:57:13.279800 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 04:57:13.280036 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 04:57:13.289528 initrd-setup-root-after-ignition[1092]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 4 04:57:13.294854 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:57:13.294854 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:57:13.301188 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:57:13.305793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 04:57:13.306744 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 04:57:13.312336 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 04:57:13.388417 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 04:57:13.388595 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 04:57:13.392927 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 04:57:13.396687 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 04:57:13.402496 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 04:57:13.405573 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 04:57:13.448087 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 04:57:13.451170 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 04:57:13.475800 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 04:57:13.476010 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:57:13.479854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:57:13.480811 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 04:57:13.486597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 04:57:13.486814 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 04:57:13.492285 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 04:57:13.493510 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 04:57:13.498487 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 04:57:13.501513 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 04:57:13.504764 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 04:57:13.508570 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 04:57:13.512709 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 04:57:13.513626 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 04:57:13.519799 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 04:57:13.520650 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 04:57:13.530763 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 04:57:13.535021 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 04:57:13.535221 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 04:57:13.541241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:57:13.542617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:57:13.549591 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 04:57:13.551424 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:57:13.552475 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 04:57:13.552645 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 04:57:13.559007 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 04:57:13.559142 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 04:57:13.560040 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 04:57:13.564675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 04:57:13.571057 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:57:13.572054 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 04:57:13.577731 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 04:57:13.578826 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 04:57:13.579054 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 04:57:13.583646 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 04:57:13.583751 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 04:57:13.586498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 04:57:13.586622 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 04:57:13.589978 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 04:57:13.590114 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 04:57:13.600032 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 04:57:13.602647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 04:57:13.610223 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 04:57:13.612222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 04:57:13.616645 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 04:57:13.618408 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:57:13.622631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 04:57:13.622825 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 04:57:13.636138 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 04:57:13.637919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 04:57:13.649653 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 04:57:13.657548 ignition[1118]: INFO : Ignition 2.22.0
Nov 4 04:57:13.657548 ignition[1118]: INFO : Stage: umount
Nov 4 04:57:13.660469 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 04:57:13.660469 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 04:57:13.660469 ignition[1118]: INFO : umount: umount passed
Nov 4 04:57:13.660469 ignition[1118]: INFO : Ignition finished successfully
Nov 4 04:57:13.664232 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 04:57:13.664374 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 04:57:13.666124 systemd[1]: Stopped target network.target - Network.
Nov 4 04:57:13.666864 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 04:57:13.666938 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 04:57:13.673520 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 04:57:13.673583 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 04:57:13.678021 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 04:57:13.678085 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 04:57:13.681270 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 04:57:13.681335 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 04:57:13.682445 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 04:57:13.683032 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 04:57:13.691926 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 04:57:13.692182 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 04:57:13.703901 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 04:57:13.704094 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 04:57:13.713453 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 04:57:13.714538 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 04:57:13.714590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:57:13.721001 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 04:57:13.724637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 04:57:13.724707 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 04:57:13.728574 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 04:57:13.728647 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:57:13.733250 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 04:57:13.733323 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:57:13.739161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:57:13.743322 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 04:57:13.750380 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 04:57:13.752451 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 04:57:13.752940 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 04:57:13.773605 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 04:57:13.773970 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 04:57:13.775306 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 04:57:13.775370 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:57:13.776047 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 04:57:13.776098 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:57:13.777292 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 04:57:13.777436 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 04:57:13.789975 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 04:57:13.790120 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 04:57:13.795442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 04:57:13.795518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 04:57:13.801086 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 04:57:13.801939 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 04:57:13.802055 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:57:13.803018 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 04:57:13.803094 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:57:13.813798 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 4 04:57:13.813982 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 04:57:13.814985 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 04:57:13.815123 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:57:13.815844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 04:57:13.815930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:57:13.993562 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 04:57:13.993867 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 04:57:13.996145 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 04:57:13.996314 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 04:57:14.006775 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 04:57:14.009062 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 04:57:14.033507 systemd[1]: Switching root.
Nov 4 04:57:14.080883 systemd-journald[316]: Journal stopped
Nov 4 04:57:16.538656 systemd-journald[316]: Received SIGTERM from PID 1 (systemd).
Nov 4 04:57:16.538744 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 04:57:16.538760 kernel: SELinux: policy capability open_perms=1
Nov 4 04:57:16.538773 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 04:57:16.538786 kernel: SELinux: policy capability always_check_network=0
Nov 4 04:57:16.538809 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 04:57:16.538831 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 04:57:16.538845 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 04:57:16.538858 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 04:57:16.538875 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 04:57:16.538891 kernel: audit: type=1403 audit(1762232235.335:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 04:57:16.538905 systemd[1]: Successfully loaded SELinux policy in 74.901ms.
Nov 4 04:57:16.538930 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.349ms.
Nov 4 04:57:16.538971 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 04:57:16.538990 systemd[1]: Detected virtualization kvm.
Nov 4 04:57:16.539003 systemd[1]: Detected architecture x86-64.
Nov 4 04:57:16.539016 systemd[1]: Detected first boot.
Nov 4 04:57:16.539029 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 04:57:16.539042 zram_generator::config[1165]: No configuration found.
Nov 4 04:57:16.539057 kernel: Guest personality initialized and is inactive
Nov 4 04:57:16.539078 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 04:57:16.539092 kernel: Initialized host personality
Nov 4 04:57:16.539114 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 04:57:16.539127 systemd[1]: Populated /etc with preset unit settings.
Nov 4 04:57:16.539143 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 04:57:16.539159 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 04:57:16.539172 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 04:57:16.539193 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 04:57:16.539207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 04:57:16.539221 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 04:57:16.539233 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 04:57:16.539247 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 04:57:16.539269 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 04:57:16.539283 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 04:57:16.539304 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 04:57:16.539319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:57:16.539333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:57:16.539346 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 04:57:16.539360 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 04:57:16.539377 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 04:57:16.539391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 04:57:16.539415 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 04:57:16.539428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:57:16.539442 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:57:16.539455 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 04:57:16.539475 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 04:57:16.539488 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 04:57:16.539511 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 04:57:16.539534 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:57:16.539548 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 04:57:16.539561 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 04:57:16.539574 systemd[1]: Reached target swap.target - Swaps.
Nov 4 04:57:16.539588 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 04:57:16.539605 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 04:57:16.539626 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 04:57:16.539640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:57:16.539653 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:57:16.539667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:57:16.539680 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 04:57:16.539694 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 04:57:16.539707 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 04:57:16.539721 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 04:57:16.539743 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:16.539756 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 04:57:16.539770 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 04:57:16.539783 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 04:57:16.539807 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 04:57:16.539821 systemd[1]: Reached target machines.target - Containers.
Nov 4 04:57:16.539844 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 04:57:16.539858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:16.539871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 04:57:16.539885 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 04:57:16.539899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:57:16.539912 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 04:57:16.539927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:57:16.539968 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 04:57:16.539983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:57:16.539997 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 04:57:16.540012 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 04:57:16.540028 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 04:57:16.540041 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 04:57:16.540054 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 04:57:16.540077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:16.540090 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 04:57:16.540104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 04:57:16.540116 kernel: fuse: init (API version 7.41)
Nov 4 04:57:16.540151 systemd-journald[1229]: Collecting audit messages is disabled.
Nov 4 04:57:16.540184 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 04:57:16.540208 systemd-journald[1229]: Journal started
Nov 4 04:57:16.540231 systemd-journald[1229]: Runtime Journal (/run/log/journal/8b13912fe01e45c7add4bc2ad99fa63a) is 6M, max 48.2M, 42.2M free.
Nov 4 04:57:16.188432 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 04:57:16.201424 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 4 04:57:16.202120 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 04:57:16.202560 systemd[1]: systemd-journald.service: Consumed 1.062s CPU time.
Nov 4 04:57:16.547144 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 04:57:16.560260 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 04:57:16.563108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 04:57:16.569031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:16.574202 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 04:57:16.576774 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 04:57:16.578643 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 04:57:16.580721 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 04:57:16.583699 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 04:57:16.584990 kernel: ACPI: bus type drm_connector registered
Nov 4 04:57:16.586252 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 04:57:16.588284 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 04:57:16.590233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:57:16.592758 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 04:57:16.593014 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 04:57:16.595323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:57:16.595540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:57:16.597726 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 04:57:16.597977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 04:57:16.600135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:57:16.600349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:57:16.602634 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 04:57:16.602860 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 04:57:16.604915 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:57:16.605151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:57:16.640816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:57:16.643112 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 04:57:16.662661 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 04:57:16.686282 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 04:57:16.690470 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 04:57:16.693099 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 04:57:16.693156 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 04:57:16.720038 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 04:57:16.722946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:16.730175 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 04:57:16.736153 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 04:57:16.738515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 04:57:16.740653 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 04:57:16.743284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 04:57:16.747572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 04:57:16.751106 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 04:57:16.754557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 04:57:16.759686 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:57:16.782182 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 04:57:16.786364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 04:57:16.789103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:57:16.791276 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 04:57:16.820860 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 04:57:16.829866 systemd-journald[1229]: Time spent on flushing to /var/log/journal/8b13912fe01e45c7add4bc2ad99fa63a is 32.325ms for 969 entries.
Nov 4 04:57:16.829866 systemd-journald[1229]: System Journal (/var/log/journal/8b13912fe01e45c7add4bc2ad99fa63a) is 8M, max 163.5M, 155.5M free.
Nov 4 04:57:16.971902 systemd-journald[1229]: Received client request to flush runtime journal.
Nov 4 04:57:16.972082 kernel: loop1: detected capacity change from 0 to 119080
Nov 4 04:57:16.904829 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Nov 4 04:57:16.904843 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Nov 4 04:57:16.905848 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 04:57:16.929833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:57:16.933490 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 04:57:16.938148 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 04:57:16.943145 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 04:57:16.948268 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 04:57:16.956144 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 04:57:16.975211 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 04:57:16.979981 kernel: loop2: detected capacity change from 0 to 111544
Nov 4 04:57:16.995558 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 04:57:17.011050 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 04:57:17.015227 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 04:57:17.020004 kernel: loop3: detected capacity change from 0 to 229808
Nov 4 04:57:17.020226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 04:57:17.041680 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 04:57:17.053146 kernel: loop4: detected capacity change from 0 to 119080
Nov 4 04:57:17.055541 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 4 04:57:17.055571 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 4 04:57:17.062762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:57:17.067993 kernel: loop5: detected capacity change from 0 to 111544
Nov 4 04:57:17.077987 kernel: loop6: detected capacity change from 0 to 229808
Nov 4 04:57:17.085624 (sd-merge)[1309]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 4 04:57:17.089728 (sd-merge)[1309]: Merged extensions into '/usr'.
Nov 4 04:57:17.094939 systemd[1]: Reload requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 04:57:17.095274 systemd[1]: Reloading...
Nov 4 04:57:17.195986 zram_generator::config[1346]: No configuration found.
Nov 4 04:57:17.212662 systemd-resolved[1304]: Positive Trust Anchors:
Nov 4 04:57:17.212683 systemd-resolved[1304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 04:57:17.212689 systemd-resolved[1304]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 04:57:17.212734 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 04:57:17.219669 systemd-resolved[1304]: Defaulting to hostname 'linux'.
Nov 4 04:57:17.419973 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 04:57:17.420255 systemd[1]: Reloading finished in 324 ms.
Nov 4 04:57:17.454080 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 04:57:17.456437 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 04:57:17.458739 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 04:57:17.464260 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:57:17.492825 systemd[1]: Starting ensure-sysext.service...
Nov 4 04:57:17.523024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 04:57:17.560615 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 04:57:17.560663 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 04:57:17.561127 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 04:57:17.561454 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 04:57:17.562875 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 04:57:17.563343 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Nov 4 04:57:17.563440 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Nov 4 04:57:17.571997 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)...
Nov 4 04:57:17.572020 systemd[1]: Reloading...
Nov 4 04:57:17.594206 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 04:57:17.594219 systemd-tmpfiles[1380]: Skipping /boot
Nov 4 04:57:17.611520 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 04:57:17.613277 systemd-tmpfiles[1380]: Skipping /boot
Nov 4 04:57:17.628989 zram_generator::config[1407]: No configuration found.
Nov 4 04:57:17.870410 systemd[1]: Reloading finished in 297 ms.
Nov 4 04:57:17.918933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 04:57:17.931146 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 04:57:17.935613 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 04:57:17.939340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 04:57:17.945252 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 04:57:17.957446 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 04:57:17.963432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:17.963613 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:17.969020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:57:17.975318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:57:17.980317 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:57:17.982330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:17.982482 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:17.982625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:17.984519 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 04:57:17.986427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:57:17.986642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:57:17.992884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:57:17.993253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:57:17.997946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:57:17.998215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:57:18.012422 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 04:57:18.025975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:18.026349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:18.031274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:57:18.034890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:57:18.040631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:57:18.043103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:18.043500 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:18.050594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:57:18.053518 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:18.060932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:57:18.061324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:57:18.067292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:57:18.071719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:57:18.075061 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:57:18.075348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:57:18.080662 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 04:57:18.098925 systemd[1]: Finished ensure-sysext.service.
Nov 4 04:57:18.102263 augenrules[1488]: No rules
Nov 4 04:57:18.103549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:18.103727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:18.105182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:57:18.108290 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 04:57:18.113269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:57:18.116180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:57:18.118263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:18.118312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:18.126088 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 04:57:18.128040 systemd-udevd[1477]: Using default interface naming scheme 'v257'.
Nov 4 04:57:18.128156 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:18.128865 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 04:57:18.129315 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 04:57:18.131115 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 04:57:18.131470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:57:18.131696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:57:18.132994 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 04:57:18.133216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 04:57:18.136667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:57:18.137280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:57:18.143762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 04:57:18.143820 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 04:57:18.146485 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:57:18.146949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:57:18.150730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 04:57:18.167374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 04:57:18.176033 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 04:57:18.293782 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 04:57:18.312549 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 04:57:18.315001 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 04:57:18.332994 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 04:57:18.336556 systemd-networkd[1516]: lo: Link UP
Nov 4 04:57:18.336909 systemd-networkd[1516]: lo: Gained carrier
Nov 4 04:57:18.341053 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 04:57:18.344436 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 04:57:18.344600 systemd[1]: Reached target network.target - Network.
Nov 4 04:57:18.350255 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 04:57:18.352194 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 04:57:18.356252 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 04:57:18.356965 systemd-networkd[1516]: eth0: Link UP
Nov 4 04:57:18.357354 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 04:57:18.360269 systemd-networkd[1516]: eth0: Gained carrier
Nov 4 04:57:18.360386 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 04:57:18.417890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 04:57:18.420537 systemd-networkd[1516]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 04:57:18.421699 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection.
Nov 4 04:57:19.445214 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 4 04:57:19.445271 systemd-timesyncd[1498]: Initial clock synchronization to Tue 2025-11-04 04:57:19.445133 UTC.
Nov 4 04:57:19.445841 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 04:57:19.448704 systemd-resolved[1304]: Clock change detected. Flushing caches.
Nov 4 04:57:19.468483 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 04:57:19.472672 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 04:57:19.478654 kernel: ACPI: button: Power Button [PWRF]
Nov 4 04:57:19.489295 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 04:57:19.500173 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 4 04:57:19.503447 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 04:57:19.669237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:57:19.703932 kernel: kvm_amd: TSC scaling supported
Nov 4 04:57:19.703994 kernel: kvm_amd: Nested Virtualization enabled
Nov 4 04:57:19.704039 kernel: kvm_amd: Nested Paging enabled
Nov 4 04:57:19.706628 kernel: kvm_amd: LBR virtualization supported
Nov 4 04:57:19.706668 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 4 04:57:19.707729 kernel: kvm_amd: Virtual GIF supported
Nov 4 04:57:19.741656 kernel: EDAC MC: Ver: 3.0.0
Nov 4 04:57:19.751642 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 04:57:19.761095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 04:57:19.764085 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 04:57:19.794726 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 04:57:19.848143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:57:19.852841 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 04:57:19.854919 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 04:57:19.857155 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 04:57:19.859484 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 04:57:19.861864 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 04:57:19.863929 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 04:57:19.866236 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 04:57:19.868512 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 04:57:19.868562 systemd[1]: Reached target paths.target - Path Units.
Nov 4 04:57:19.870219 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 04:57:19.874002 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 04:57:19.877798 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 04:57:19.881835 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 04:57:19.884210 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 04:57:19.886445 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 04:57:19.891425 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 04:57:19.893710 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 04:57:19.896521 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 04:57:19.899261 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 04:57:19.900941 systemd[1]: Reached target basic.target - Basic System.
Nov 4 04:57:19.902673 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 04:57:19.902716 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 04:57:19.904395 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 04:57:19.907594 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 04:57:19.910299 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 04:57:19.913857 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 04:57:19.917750 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 04:57:19.919469 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 04:57:19.921948 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 04:57:19.923972 jq[1577]: false
Nov 4 04:57:19.942155 oslogin_cache_refresh[1579]: Refreshing passwd entry cache
Nov 4 04:57:19.925366 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 04:57:19.945433 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache
Nov 4 04:57:19.929704 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 04:57:19.933651 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 04:57:19.939631 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 04:57:19.946958 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 04:57:19.950657 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 04:57:19.951668 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting
Nov 4 04:57:19.951662 oslogin_cache_refresh[1579]: Failure getting users, quitting
Nov 4 04:57:19.951882 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 04:57:19.951882 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache
Nov 4 04:57:19.951687 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 04:57:19.952791 extend-filesystems[1578]: Found /dev/vda6
Nov 4 04:57:19.951756 oslogin_cache_refresh[1579]: Refreshing group entry cache
Nov 4 04:57:19.954208 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 04:57:19.955371 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 04:57:19.958258 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting
Nov 4 04:57:19.958237 oslogin_cache_refresh[1579]: Failure getting groups, quitting
Nov 4 04:57:19.960687 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 04:57:19.960434 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 04:57:19.960806 extend-filesystems[1578]: Found /dev/vda9
Nov 4 04:57:19.958721 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 04:57:19.963905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 04:57:19.964980 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 04:57:19.965252 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 04:57:19.966635 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 04:57:19.966980 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 04:57:19.968376 extend-filesystems[1578]: Checking size of /dev/vda9
Nov 4 04:57:19.976840 jq[1592]: true
Nov 4 04:57:19.978272 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 04:57:19.978957 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 04:57:19.988035 extend-filesystems[1578]: Resized partition /dev/vda9
Nov 4 04:57:19.997022 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 04:57:19.997317 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 04:57:20.000401 extend-filesystems[1615]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 04:57:20.011323 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 4 04:57:20.015745 update_engine[1591]: I20251104 04:57:20.015653 1591 main.cc:92] Flatcar Update Engine starting
Nov 4 04:57:20.021873 jq[1609]: true
Nov 4 04:57:20.027530 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 04:57:20.029838 tar[1600]: linux-amd64/LICENSE
Nov 4 04:57:20.029838 tar[1600]: linux-amd64/helm
Nov 4 04:57:20.052644 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 4 04:57:20.055439 dbus-daemon[1575]: [system] SELinux support is enabled
Nov 4 04:57:20.055844 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 04:57:20.088299 update_engine[1591]: I20251104 04:57:20.076735 1591 update_check_scheduler.cc:74] Next update check in 3m45s
Nov 4 04:57:20.061851 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 04:57:20.061887 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 04:57:20.064427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 04:57:20.064452 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 04:57:20.076602 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 04:57:20.084811 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 04:57:20.091242 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 04:57:20.091242 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 4 04:57:20.091242 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 4 04:57:20.090280 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 04:57:20.102991 extend-filesystems[1578]: Resized filesystem in /dev/vda9
Nov 4 04:57:20.090586 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 04:57:20.124970 bash[1643]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 04:57:20.124439 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 04:57:20.132014 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 04:57:20.159291 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 4 04:57:20.185158 systemd-logind[1587]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 4 04:57:20.185193 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 4 04:57:20.185943 systemd-logind[1587]: New seat seat0.
Nov 4 04:57:20.187180 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 4 04:57:20.276378 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 04:57:20.285823 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 4 04:57:20.289744 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 4 04:57:20.292984 systemd[1]: Started sshd@0-10.0.0.56:22-10.0.0.1:47786.service - OpenSSH per-connection server daemon (10.0.0.1:47786).
Nov 4 04:57:20.378934 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 04:57:20.379341 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 04:57:20.387028 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 04:57:20.452887 systemd-networkd[1516]: eth0: Gained IPv6LL
Nov 4 04:57:20.462301 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 04:57:20.465235 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 04:57:20.469883 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 4 04:57:20.475489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:57:20.483686 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 04:57:20.495485 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 04:57:20.516434 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 4 04:57:20.525995 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 4 04:57:20.528313 systemd[1]: Reached target getty.target - Login Prompts.
Nov 4 04:57:20.571537 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 4 04:57:20.571907 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 4 04:57:20.574895 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 04:57:20.578351 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 04:57:20.649267 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 47786 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 04:57:20.652422 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:57:20.668417 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 04:57:20.673900 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 04:57:20.700858 containerd[1611]: time="2025-11-04T04:57:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 04:57:20.705354 containerd[1611]: time="2025-11-04T04:57:20.704573517Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 4 04:57:20.718571 systemd-logind[1587]: New session 1 of user core.
Nov 4 04:57:20.733508 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 04:57:20.740049 containerd[1611]: time="2025-11-04T04:57:20.739987839Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.804µs"
Nov 4 04:57:20.740177 containerd[1611]: time="2025-11-04T04:57:20.740154652Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 04:57:20.740302 containerd[1611]: time="2025-11-04T04:57:20.740282772Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 04:57:20.740380 containerd[1611]: time="2025-11-04T04:57:20.740363293Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 04:57:20.740725 containerd[1611]: time="2025-11-04T04:57:20.740698922Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 04:57:20.740815 containerd[1611]: time="2025-11-04T04:57:20.740793610Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 04:57:20.740965 containerd[1611]: time="2025-11-04T04:57:20.740941738Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 04:57:20.741050 containerd[1611]: time="2025-11-04T04:57:20.741031496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 04:57:20.741445 containerd[1611]: time="2025-11-04T04:57:20.741418461Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 04:57:20.741518 containerd[1611]: time="2025-11-04T04:57:20.741500425Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 04:57:20.741588 containerd[1611]: time="2025-11-04T04:57:20.741569625Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 04:57:20.741603 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 04:57:20.741767 containerd[1611]: time="2025-11-04T04:57:20.741741447Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 4 04:57:20.742165 containerd[1611]: time="2025-11-04T04:57:20.742136858Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 4 04:57:20.742240 containerd[1611]: time="2025-11-04T04:57:20.742223170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 04:57:20.742427 containerd[1611]: time="2025-11-04T04:57:20.742394231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 04:57:20.742806 containerd[1611]: time="2025-11-04T04:57:20.742781597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 04:57:20.742913 containerd[1611]: time="2025-11-04T04:57:20.742889690Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 04:57:20.742980 containerd[1611]: time="2025-11-04T04:57:20.742964210Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 04:57:20.743108 containerd[1611]: time="2025-11-04T04:57:20.743088232Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 04:57:20.743687 containerd[1611]: time="2025-11-04T04:57:20.743662028Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 04:57:20.743836 containerd[1611]: time="2025-11-04T04:57:20.743815796Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 04:57:20.753846 containerd[1611]: time="2025-11-04T04:57:20.753784568Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 04:57:20.754420 containerd[1611]: time="2025-11-04T04:57:20.754300856Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 04:57:20.754849 containerd[1611]: time="2025-11-04T04:57:20.754762441Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 04:57:20.754849 containerd[1611]: time="2025-11-04T04:57:20.754792408Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 04:57:20.754975 containerd[1611]: time="2025-11-04T04:57:20.754813537Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 04:57:20.755072 containerd[1611]: time="2025-11-04T04:57:20.755052846Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 04:57:20.755211 containerd[1611]: time="2025-11-04T04:57:20.755132034Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 04:57:20.755211 containerd[1611]: time="2025-11-04T04:57:20.755149116Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 04:57:20.755211 containerd[1611]: time="2025-11-04T04:57:20.755164946Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 04:57:20.755323 containerd[1611]: time="2025-11-04T04:57:20.755181587Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 04:57:20.755468 containerd[1611]: time="2025-11-04T04:57:20.755399316Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 04:57:20.755468 containerd[1611]: time="2025-11-04T04:57:20.755431416Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 04:57:20.755568 containerd[1611]: time="2025-11-04T04:57:20.755444490Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 04:57:20.755719 containerd[1611]: time="2025-11-04T04:57:20.755658662Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 04:57:20.756041 containerd[1611]: time="2025-11-04T04:57:20.756000633Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 04:57:20.756178 containerd[1611]: time="2025-11-04T04:57:20.756120097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 04:57:20.756178 containerd[1611]: time="2025-11-04T04:57:20.756150594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 04:57:20.756334 containerd[1611]: time="2025-11-04T04:57:20.756273795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 04:57:20.756334 containerd[1611]: time="2025-11-04T04:57:20.756295235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 04:57:20.756334 containerd[1611]: time="2025-11-04T04:57:20.756308530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 04:57:20.756486 containerd[1611]: time="2025-11-04T04:57:20.756465064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756538902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756558299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756573828Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756587403Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756636806Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756737505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756759346Z" level=info msg="Start snapshots syncer"
Nov 4 04:57:20.757012 containerd[1611]: time="2025-11-04T04:57:20.756808708Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 04:57:20.759473 containerd[1611]: time="2025-11-04T04:57:20.759392833Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 04:57:20.759863 containerd[1611]: time="2025-11-04T04:57:20.759820816Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 04:57:20.760161 containerd[1611]:
time="2025-11-04T04:57:20.760118374Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 04:57:20.760453 containerd[1611]: time="2025-11-04T04:57:20.760429477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 04:57:20.760566 containerd[1611]: time="2025-11-04T04:57:20.760533462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 04:57:20.763603 containerd[1611]: time="2025-11-04T04:57:20.763575185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 04:57:20.763786 containerd[1611]: time="2025-11-04T04:57:20.763711631Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 04:57:20.764956 containerd[1611]: time="2025-11-04T04:57:20.763856533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 04:57:20.764758 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 04:57:20.770595 containerd[1611]: time="2025-11-04T04:57:20.770543972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 04:57:20.770742 containerd[1611]: time="2025-11-04T04:57:20.770725763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 04:57:20.770833 containerd[1611]: time="2025-11-04T04:57:20.770817795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 04:57:20.770895 containerd[1611]: time="2025-11-04T04:57:20.770881565Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 04:57:20.771014 containerd[1611]: time="2025-11-04T04:57:20.770998364Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 04:57:20.771195 containerd[1611]: time="2025-11-04T04:57:20.771175706Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 04:57:20.771359 containerd[1611]: time="2025-11-04T04:57:20.771340425Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:57:20.771440 containerd[1611]: time="2025-11-04T04:57:20.771423260Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:57:20.771494 containerd[1611]: time="2025-11-04T04:57:20.771481049Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 04:57:20.771583 containerd[1611]: time="2025-11-04T04:57:20.771564575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 04:57:20.771806 containerd[1611]: time="2025-11-04T04:57:20.771649565Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 04:57:20.771806 containerd[1611]: time="2025-11-04T04:57:20.771685733Z" level=info msg="runtime interface created" Nov 4 04:57:20.771806 containerd[1611]: time="2025-11-04T04:57:20.771692736Z" level=info msg="created NRI interface" Nov 4 04:57:20.771806 containerd[1611]: time="2025-11-04T04:57:20.771703616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 04:57:20.771806 containerd[1611]: time="2025-11-04T04:57:20.771725036Z" level=info msg="Connect containerd service" Nov 4 04:57:20.771806 containerd[1611]: time="2025-11-04T04:57:20.771758790Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 04:57:20.773629 containerd[1611]: 
time="2025-11-04T04:57:20.773573392Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 04:57:20.807675 systemd-logind[1587]: New session c1 of user core. Nov 4 04:57:21.022092 systemd[1699]: Queued start job for default target default.target. Nov 4 04:57:21.054844 systemd[1699]: Created slice app.slice - User Application Slice. Nov 4 04:57:21.054888 systemd[1699]: Reached target paths.target - Paths. Nov 4 04:57:21.054959 systemd[1699]: Reached target timers.target - Timers. Nov 4 04:57:21.057140 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 04:57:21.089241 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 04:57:21.089458 systemd[1699]: Reached target sockets.target - Sockets. Nov 4 04:57:21.090507 systemd[1699]: Reached target basic.target - Basic System. Nov 4 04:57:21.090625 systemd[1699]: Reached target default.target - Main User Target. Nov 4 04:57:21.090680 systemd[1699]: Startup finished in 267ms. Nov 4 04:57:21.090891 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 04:57:21.109690 containerd[1611]: time="2025-11-04T04:57:21.109563503Z" level=info msg="Start subscribing containerd event" Nov 4 04:57:21.110061 containerd[1611]: time="2025-11-04T04:57:21.109915734Z" level=info msg="Start recovering state" Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110145274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110284235Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110357051Z" level=info msg="Start event monitor" Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110416012Z" level=info msg="Start cni network conf syncer for default" Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110486875Z" level=info msg="Start streaming server" Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110508645Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110530196Z" level=info msg="runtime interface starting up..." Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110571724Z" level=info msg="starting plugins..." Nov 4 04:57:21.110634 containerd[1611]: time="2025-11-04T04:57:21.110601750Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 04:57:21.110948 containerd[1611]: time="2025-11-04T04:57:21.110912352Z" level=info msg="containerd successfully booted in 0.411118s" Nov 4 04:57:21.176208 tar[1600]: linux-amd64/README.md Nov 4 04:57:21.176958 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 04:57:21.179677 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 04:57:21.209980 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 04:57:21.224757 systemd[1]: Started sshd@1-10.0.0.56:22-10.0.0.1:47790.service - OpenSSH per-connection server daemon (10.0.0.1:47790). Nov 4 04:57:21.310261 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 47790 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:21.312557 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:21.333950 systemd-logind[1587]: New session 2 of user core. Nov 4 04:57:21.347765 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 4 04:57:21.364805 sshd[1730]: Connection closed by 10.0.0.1 port 47790 Nov 4 04:57:21.365455 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:21.381725 systemd[1]: sshd@1-10.0.0.56:22-10.0.0.1:47790.service: Deactivated successfully. Nov 4 04:57:21.384238 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 04:57:21.385418 systemd-logind[1587]: Session 2 logged out. Waiting for processes to exit. Nov 4 04:57:21.390014 systemd[1]: Started sshd@2-10.0.0.56:22-10.0.0.1:47800.service - OpenSSH per-connection server daemon (10.0.0.1:47800). Nov 4 04:57:21.393159 systemd-logind[1587]: Removed session 2. Nov 4 04:57:21.461778 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 47800 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:21.463556 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:21.468983 systemd-logind[1587]: New session 3 of user core. Nov 4 04:57:21.483916 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 04:57:21.505039 sshd[1739]: Connection closed by 10.0.0.1 port 47800 Nov 4 04:57:21.505414 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:21.510471 systemd[1]: sshd@2-10.0.0.56:22-10.0.0.1:47800.service: Deactivated successfully. Nov 4 04:57:21.512666 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 04:57:21.513385 systemd-logind[1587]: Session 3 logged out. Waiting for processes to exit. Nov 4 04:57:21.514956 systemd-logind[1587]: Removed session 3. Nov 4 04:57:22.074604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:57:22.077470 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 04:57:22.079574 systemd[1]: Startup finished in 3.396s (kernel) + 9.353s (initrd) + 5.795s (userspace) = 18.544s. 
Nov 4 04:57:22.094177 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:57:22.692446 kubelet[1749]: E1104 04:57:22.692364 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:57:22.696861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:57:22.697085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:57:22.697510 systemd[1]: kubelet.service: Consumed 1.742s CPU time, 266.8M memory peak. Nov 4 04:57:31.536294 systemd[1]: Started sshd@3-10.0.0.56:22-10.0.0.1:37656.service - OpenSSH per-connection server daemon (10.0.0.1:37656). Nov 4 04:57:31.598755 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 37656 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:31.600874 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:31.606868 systemd-logind[1587]: New session 4 of user core. Nov 4 04:57:31.619861 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 04:57:31.634314 sshd[1765]: Connection closed by 10.0.0.1 port 37656 Nov 4 04:57:31.634747 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:31.658245 systemd[1]: sshd@3-10.0.0.56:22-10.0.0.1:37656.service: Deactivated successfully. Nov 4 04:57:31.660507 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 04:57:31.661293 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. Nov 4 04:57:31.664789 systemd[1]: Started sshd@4-10.0.0.56:22-10.0.0.1:37666.service - OpenSSH per-connection server daemon (10.0.0.1:37666). 
Nov 4 04:57:31.665643 systemd-logind[1587]: Removed session 4. Nov 4 04:57:31.732816 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 37666 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:31.734793 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:31.744419 systemd-logind[1587]: New session 5 of user core. Nov 4 04:57:31.760379 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 04:57:31.771635 sshd[1774]: Connection closed by 10.0.0.1 port 37666 Nov 4 04:57:31.772043 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:31.782869 systemd[1]: sshd@4-10.0.0.56:22-10.0.0.1:37666.service: Deactivated successfully. Nov 4 04:57:31.784996 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 04:57:31.785893 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Nov 4 04:57:31.789052 systemd[1]: Started sshd@5-10.0.0.56:22-10.0.0.1:37682.service - OpenSSH per-connection server daemon (10.0.0.1:37682). Nov 4 04:57:31.789834 systemd-logind[1587]: Removed session 5. Nov 4 04:57:31.855107 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 37682 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:31.857021 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:31.862029 systemd-logind[1587]: New session 6 of user core. Nov 4 04:57:31.871761 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 04:57:31.886528 sshd[1784]: Connection closed by 10.0.0.1 port 37682 Nov 4 04:57:31.886935 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:31.900419 systemd[1]: sshd@5-10.0.0.56:22-10.0.0.1:37682.service: Deactivated successfully. Nov 4 04:57:31.902493 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 04:57:31.903324 systemd-logind[1587]: Session 6 logged out. 
Waiting for processes to exit. Nov 4 04:57:31.906337 systemd[1]: Started sshd@6-10.0.0.56:22-10.0.0.1:37698.service - OpenSSH per-connection server daemon (10.0.0.1:37698). Nov 4 04:57:31.906902 systemd-logind[1587]: Removed session 6. Nov 4 04:57:31.961135 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 37698 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:31.962763 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:31.967248 systemd-logind[1587]: New session 7 of user core. Nov 4 04:57:31.976758 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 04:57:31.999849 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 04:57:32.000182 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:57:32.020698 sudo[1795]: pam_unix(sudo:session): session closed for user root Nov 4 04:57:32.022828 sshd[1794]: Connection closed by 10.0.0.1 port 37698 Nov 4 04:57:32.023280 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:32.038191 systemd[1]: sshd@6-10.0.0.56:22-10.0.0.1:37698.service: Deactivated successfully. Nov 4 04:57:32.040903 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 04:57:32.042038 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. Nov 4 04:57:32.046082 systemd[1]: Started sshd@7-10.0.0.56:22-10.0.0.1:37702.service - OpenSSH per-connection server daemon (10.0.0.1:37702). Nov 4 04:57:32.046917 systemd-logind[1587]: Removed session 7. Nov 4 04:57:32.098630 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 37702 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:32.100229 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:32.106056 systemd-logind[1587]: New session 8 of user core. 
Nov 4 04:57:32.124839 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 04:57:32.140662 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 04:57:32.140969 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:57:32.281284 sudo[1806]: pam_unix(sudo:session): session closed for user root Nov 4 04:57:32.291059 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 04:57:32.291472 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:57:32.307143 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 04:57:32.363201 augenrules[1828]: No rules Nov 4 04:57:32.365222 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 04:57:32.365653 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 04:57:32.367074 sudo[1805]: pam_unix(sudo:session): session closed for user root Nov 4 04:57:32.369091 sshd[1804]: Connection closed by 10.0.0.1 port 37702 Nov 4 04:57:32.369476 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:32.383925 systemd[1]: sshd@7-10.0.0.56:22-10.0.0.1:37702.service: Deactivated successfully. Nov 4 04:57:32.386123 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 04:57:32.387060 systemd-logind[1587]: Session 8 logged out. Waiting for processes to exit. Nov 4 04:57:32.390078 systemd[1]: Started sshd@8-10.0.0.56:22-10.0.0.1:37706.service - OpenSSH per-connection server daemon (10.0.0.1:37706). Nov 4 04:57:32.391069 systemd-logind[1587]: Removed session 8. 
Nov 4 04:57:32.456606 sshd[1838]: Accepted publickey for core from 10.0.0.1 port 37706 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:57:32.458693 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:57:32.464875 systemd-logind[1587]: New session 9 of user core. Nov 4 04:57:32.480042 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 04:57:32.500405 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 04:57:32.500927 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:57:32.948089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 04:57:32.951139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:57:33.455709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:57:33.473841 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:57:33.539201 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 04:57:33.550174 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 04:57:33.566703 kubelet[1869]: E1104 04:57:33.566584 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:57:33.576212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:57:33.576606 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 4 04:57:33.577324 systemd[1]: kubelet.service: Consumed 402ms CPU time, 111.2M memory peak. Nov 4 04:57:34.222973 dockerd[1877]: time="2025-11-04T04:57:34.222894876Z" level=info msg="Starting up" Nov 4 04:57:34.223885 dockerd[1877]: time="2025-11-04T04:57:34.223858172Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 04:57:34.248821 dockerd[1877]: time="2025-11-04T04:57:34.248762358Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 04:57:35.230948 dockerd[1877]: time="2025-11-04T04:57:35.230871462Z" level=info msg="Loading containers: start." Nov 4 04:57:35.245683 kernel: Initializing XFRM netlink socket Nov 4 04:57:35.619084 systemd-networkd[1516]: docker0: Link UP Nov 4 04:57:35.626130 dockerd[1877]: time="2025-11-04T04:57:35.626064920Z" level=info msg="Loading containers: done." Nov 4 04:57:35.648919 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2925931665-merged.mount: Deactivated successfully. 
Nov 4 04:57:35.712118 dockerd[1877]: time="2025-11-04T04:57:35.712032389Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 04:57:35.712319 dockerd[1877]: time="2025-11-04T04:57:35.712168153Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 04:57:35.712319 dockerd[1877]: time="2025-11-04T04:57:35.712295542Z" level=info msg="Initializing buildkit" Nov 4 04:57:35.942939 dockerd[1877]: time="2025-11-04T04:57:35.942773608Z" level=info msg="Completed buildkit initialization" Nov 4 04:57:35.949955 dockerd[1877]: time="2025-11-04T04:57:35.949883790Z" level=info msg="Daemon has completed initialization" Nov 4 04:57:35.950099 dockerd[1877]: time="2025-11-04T04:57:35.950001601Z" level=info msg="API listen on /run/docker.sock" Nov 4 04:57:35.950364 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 04:57:36.939600 containerd[1611]: time="2025-11-04T04:57:36.939522512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 04:57:37.815917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566346005.mount: Deactivated successfully. 
Nov 4 04:57:38.994995 containerd[1611]: time="2025-11-04T04:57:38.990982359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:38.995593 containerd[1611]: time="2025-11-04T04:57:38.991789633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28442726" Nov 4 04:57:38.996905 containerd[1611]: time="2025-11-04T04:57:38.996867423Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:39.001686 containerd[1611]: time="2025-11-04T04:57:39.001644751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:39.002990 containerd[1611]: time="2025-11-04T04:57:39.002947083Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.063342968s" Nov 4 04:57:39.003036 containerd[1611]: time="2025-11-04T04:57:39.003008588Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 4 04:57:39.003890 containerd[1611]: time="2025-11-04T04:57:39.003850988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 04:57:41.493599 containerd[1611]: time="2025-11-04T04:57:41.493512412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:41.494829 containerd[1611]: time="2025-11-04T04:57:41.494757076Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26012689" Nov 4 04:57:41.496241 containerd[1611]: time="2025-11-04T04:57:41.496191806Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:41.499223 containerd[1611]: time="2025-11-04T04:57:41.499102934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:41.500237 containerd[1611]: time="2025-11-04T04:57:41.500183350Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.496287409s" Nov 4 04:57:41.500237 containerd[1611]: time="2025-11-04T04:57:41.500235057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 4 04:57:41.501523 containerd[1611]: time="2025-11-04T04:57:41.501258586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 04:57:43.827205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 04:57:43.831248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:57:44.177817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 04:57:44.226375 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:57:44.384154 kubelet[2173]: E1104 04:57:44.383942 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:57:44.391588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:57:44.392155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:57:44.392658 systemd[1]: kubelet.service: Consumed 480ms CPU time, 110.8M memory peak. Nov 4 04:57:44.835336 containerd[1611]: time="2025-11-04T04:57:44.835228930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:44.837973 containerd[1611]: time="2025-11-04T04:57:44.837891352Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20150665" Nov 4 04:57:44.839707 containerd[1611]: time="2025-11-04T04:57:44.839669436Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:44.842739 containerd[1611]: time="2025-11-04T04:57:44.842684539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:44.843766 containerd[1611]: time="2025-11-04T04:57:44.843716865Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 3.342410268s" Nov 4 04:57:44.843766 containerd[1611]: time="2025-11-04T04:57:44.843745759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 4 04:57:44.844743 containerd[1611]: time="2025-11-04T04:57:44.844698455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 04:57:48.058168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187111488.mount: Deactivated successfully. Nov 4 04:57:48.731381 containerd[1611]: time="2025-11-04T04:57:48.731308658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:48.732272 containerd[1611]: time="2025-11-04T04:57:48.732237360Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31927129" Nov 4 04:57:48.733461 containerd[1611]: time="2025-11-04T04:57:48.733416921Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:48.735406 containerd[1611]: time="2025-11-04T04:57:48.735365194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:48.735979 containerd[1611]: time="2025-11-04T04:57:48.735932187Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.891197895s" Nov 4 04:57:48.736056 containerd[1611]: time="2025-11-04T04:57:48.735977873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 4 04:57:48.736783 containerd[1611]: time="2025-11-04T04:57:48.736528756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 04:57:49.825828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240560938.mount: Deactivated successfully. Nov 4 04:57:51.587517 containerd[1611]: time="2025-11-04T04:57:51.587413614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:51.588737 containerd[1611]: time="2025-11-04T04:57:51.588621719Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20213280" Nov 4 04:57:51.590093 containerd[1611]: time="2025-11-04T04:57:51.590021033Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:51.593762 containerd[1611]: time="2025-11-04T04:57:51.593672219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:51.594868 containerd[1611]: time="2025-11-04T04:57:51.594810263Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.858236864s" Nov 4 04:57:51.594935 containerd[1611]: time="2025-11-04T04:57:51.594866959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 4 04:57:51.595561 containerd[1611]: time="2025-11-04T04:57:51.595532738Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 04:57:52.286317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898450918.mount: Deactivated successfully. Nov 4 04:57:52.292847 containerd[1611]: time="2025-11-04T04:57:52.292773734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:57:52.293723 containerd[1611]: time="2025-11-04T04:57:52.293672609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 04:57:52.294919 containerd[1611]: time="2025-11-04T04:57:52.294857799Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:57:52.297164 containerd[1611]: time="2025-11-04T04:57:52.297135812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:57:52.297960 containerd[1611]: time="2025-11-04T04:57:52.297930982Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 702.364811ms" Nov 4 04:57:52.298171 containerd[1611]: time="2025-11-04T04:57:52.297964736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 04:57:52.298577 containerd[1611]: time="2025-11-04T04:57:52.298522494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 04:57:52.973193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605560763.mount: Deactivated successfully. Nov 4 04:57:54.596585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 04:57:54.599191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:57:54.895466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:57:54.902443 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:57:54.973848 kubelet[2307]: E1104 04:57:54.973755 2307 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:57:54.978665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:57:54.978901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:57:54.979700 systemd[1]: kubelet.service: Consumed 304ms CPU time, 110.4M memory peak. 
Nov 4 04:57:56.477380 containerd[1611]: time="2025-11-04T04:57:56.477318162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:56.478178 containerd[1611]: time="2025-11-04T04:57:56.478150919Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46258173" Nov 4 04:57:56.479527 containerd[1611]: time="2025-11-04T04:57:56.479492629Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:56.482572 containerd[1611]: time="2025-11-04T04:57:56.482543424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:56.483853 containerd[1611]: time="2025-11-04T04:57:56.483794251Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.185235267s" Nov 4 04:57:56.483898 containerd[1611]: time="2025-11-04T04:57:56.483854204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 04:58:01.407367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:01.407550 systemd[1]: kubelet.service: Consumed 304ms CPU time, 110.4M memory peak. Nov 4 04:58:01.410394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:01.439644 systemd[1]: Reload requested from client PID 2351 ('systemctl') (unit session-9.scope)... 
Nov 4 04:58:01.439674 systemd[1]: Reloading... Nov 4 04:58:01.787925 zram_generator::config[2395]: No configuration found. Nov 4 04:58:02.201380 systemd[1]: Reloading finished in 761 ms. Nov 4 04:58:02.272839 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 04:58:02.272980 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 04:58:02.273435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:02.273500 systemd[1]: kubelet.service: Consumed 204ms CPU time, 98.4M memory peak. Nov 4 04:58:02.275985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:02.596103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:02.615082 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:58:02.772395 kubelet[2443]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:58:02.772395 kubelet[2443]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:58:02.772395 kubelet[2443]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 04:58:02.772868 kubelet[2443]: I1104 04:58:02.772473 2443 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:58:03.087501 kubelet[2443]: I1104 04:58:03.087366 2443 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 04:58:03.087501 kubelet[2443]: I1104 04:58:03.087412 2443 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:58:03.087781 kubelet[2443]: I1104 04:58:03.087723 2443 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 04:58:03.131924 kubelet[2443]: E1104 04:58:03.131864 2443 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 04:58:03.137252 kubelet[2443]: I1104 04:58:03.137201 2443 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:58:03.151049 kubelet[2443]: I1104 04:58:03.150998 2443 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:58:03.157248 kubelet[2443]: I1104 04:58:03.157212 2443 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 04:58:03.157783 kubelet[2443]: I1104 04:58:03.157742 2443 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:58:03.158038 kubelet[2443]: I1104 04:58:03.157775 2443 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:58:03.158270 kubelet[2443]: I1104 04:58:03.158052 2443 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:58:03.158270 
kubelet[2443]: I1104 04:58:03.158069 2443 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 04:58:03.158324 kubelet[2443]: I1104 04:58:03.158291 2443 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:03.161534 kubelet[2443]: I1104 04:58:03.161494 2443 kubelet.go:480] "Attempting to sync node with API server" Nov 4 04:58:03.161534 kubelet[2443]: I1104 04:58:03.161528 2443 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:58:03.161607 kubelet[2443]: I1104 04:58:03.161569 2443 kubelet.go:386] "Adding apiserver pod source" Nov 4 04:58:03.161607 kubelet[2443]: I1104 04:58:03.161596 2443 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:58:03.168418 kubelet[2443]: E1104 04:58:03.168338 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 04:58:03.170129 kubelet[2443]: E1104 04:58:03.170094 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 04:58:03.175149 kubelet[2443]: I1104 04:58:03.175114 2443 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:58:03.175699 kubelet[2443]: I1104 04:58:03.175667 2443 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 04:58:03.176434 kubelet[2443]: W1104 04:58:03.176395 2443 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 04:58:03.181052 kubelet[2443]: I1104 04:58:03.181021 2443 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:58:03.181104 kubelet[2443]: I1104 04:58:03.181095 2443 server.go:1289] "Started kubelet" Nov 4 04:58:03.196842 kubelet[2443]: I1104 04:58:03.196589 2443 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:58:03.198152 kubelet[2443]: I1104 04:58:03.198123 2443 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:58:03.202146 kubelet[2443]: I1104 04:58:03.202109 2443 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:58:03.203304 kubelet[2443]: I1104 04:58:03.203251 2443 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:58:03.203723 kubelet[2443]: E1104 04:58:03.203705 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:03.203989 kubelet[2443]: I1104 04:58:03.203976 2443 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:58:03.204598 kubelet[2443]: I1104 04:58:03.204574 2443 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 04:58:03.204830 kubelet[2443]: I1104 04:58:03.204807 2443 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:58:03.205767 kubelet[2443]: E1104 04:58:03.205714 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 04:58:03.206232 
kubelet[2443]: I1104 04:58:03.206033 2443 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:58:03.207419 kubelet[2443]: I1104 04:58:03.207402 2443 server.go:317] "Adding debug handlers to kubelet server" Nov 4 04:58:03.208905 kubelet[2443]: E1104 04:58:03.208869 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="200ms" Nov 4 04:58:03.209420 kubelet[2443]: I1104 04:58:03.209383 2443 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:58:03.209685 kubelet[2443]: E1104 04:58:03.209455 2443 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:58:03.211450 kubelet[2443]: I1104 04:58:03.211413 2443 factory.go:223] Registration of the containerd container factory successfully Nov 4 04:58:03.211831 kubelet[2443]: I1104 04:58:03.211756 2443 factory.go:223] Registration of the systemd container factory successfully Nov 4 04:58:03.225867 kubelet[2443]: E1104 04:58:03.223910 2443 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874b4ecc2bb65c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 04:58:03.18104928 +0000 UTC m=+0.560137370,LastTimestamp:2025-11-04 04:58:03.18104928 +0000 
UTC m=+0.560137370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 04:58:03.235396 kubelet[2443]: I1104 04:58:03.235356 2443 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:58:03.235396 kubelet[2443]: I1104 04:58:03.235385 2443 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:58:03.235505 kubelet[2443]: I1104 04:58:03.235407 2443 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:03.237094 kubelet[2443]: I1104 04:58:03.237045 2443 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 04:58:03.238911 kubelet[2443]: I1104 04:58:03.238778 2443 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 04:58:03.238911 kubelet[2443]: I1104 04:58:03.238832 2443 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 04:58:03.238911 kubelet[2443]: I1104 04:58:03.238866 2443 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 04:58:03.238911 kubelet[2443]: I1104 04:58:03.238886 2443 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 04:58:03.239060 kubelet[2443]: E1104 04:58:03.238952 2443 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:58:03.304552 kubelet[2443]: E1104 04:58:03.304463 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:03.340084 kubelet[2443]: E1104 04:58:03.339874 2443 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 4 04:58:03.405173 kubelet[2443]: E1104 04:58:03.405108 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:03.410012 kubelet[2443]: E1104 04:58:03.409956 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="400ms" Nov 4 04:58:03.506361 kubelet[2443]: E1104 04:58:03.506244 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:03.540739 kubelet[2443]: E1104 04:58:03.540660 2443 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 4 04:58:03.607220 kubelet[2443]: E1104 04:58:03.607151 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:03.708179 kubelet[2443]: E1104 04:58:03.708089 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:03.808777 kubelet[2443]: E1104 04:58:03.808688 2443 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Nov 4 04:58:03.811555 kubelet[2443]: E1104 04:58:03.811495 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="800ms" Nov 4 04:58:03.908955 kubelet[2443]: E1104 04:58:03.908756 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:03.941063 kubelet[2443]: E1104 04:58:03.940989 2443 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 4 04:58:04.009703 kubelet[2443]: E1104 04:58:04.009596 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:04.046406 kubelet[2443]: E1104 04:58:04.046320 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 04:58:04.066963 kubelet[2443]: E1104 04:58:04.066897 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 04:58:04.110836 kubelet[2443]: E1104 04:58:04.110770 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:04.177206 kubelet[2443]: I1104 04:58:04.176987 2443 policy_none.go:49] "None policy: Start" Nov 4 04:58:04.177206 kubelet[2443]: I1104 
04:58:04.177085 2443 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:58:04.177206 kubelet[2443]: I1104 04:58:04.177121 2443 state_mem.go:35] "Initializing new in-memory state store" Nov 4 04:58:04.189607 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 04:58:04.207196 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 04:58:04.210915 kubelet[2443]: E1104 04:58:04.210882 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:58:04.212060 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 04:58:04.224194 kubelet[2443]: E1104 04:58:04.224135 2443 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 04:58:04.224680 kubelet[2443]: I1104 04:58:04.224487 2443 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:58:04.224680 kubelet[2443]: I1104 04:58:04.224521 2443 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:58:04.225040 kubelet[2443]: I1104 04:58:04.224994 2443 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:58:04.226344 kubelet[2443]: E1104 04:58:04.226283 2443 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 04:58:04.226583 kubelet[2443]: E1104 04:58:04.226361 2443 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 04:58:04.327419 kubelet[2443]: I1104 04:58:04.327327 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:58:04.327900 kubelet[2443]: E1104 04:58:04.327834 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 4 04:58:04.530165 kubelet[2443]: I1104 04:58:04.530032 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:58:04.530486 kubelet[2443]: E1104 04:58:04.530440 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 4 04:58:04.612710 kubelet[2443]: E1104 04:58:04.612590 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="1.6s" Nov 4 04:58:04.640653 kubelet[2443]: E1104 04:58:04.640568 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 04:58:04.645823 kubelet[2443]: E1104 04:58:04.645752 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 04:58:04.750597 kubelet[2443]: E1104 04:58:04.750362 2443 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874b4ecc2bb65c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 04:58:03.18104928 +0000 UTC m=+0.560137370,LastTimestamp:2025-11-04 04:58:03.18104928 +0000 UTC m=+0.560137370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 04:58:04.757295 systemd[1]: Created slice kubepods-burstable-pod43f14609ba15a5c9c8ffa1c78821b9e3.slice - libcontainer container kubepods-burstable-pod43f14609ba15a5c9c8ffa1c78821b9e3.slice. Nov 4 04:58:04.782777 kubelet[2443]: E1104 04:58:04.782662 2443 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:58:04.786449 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Nov 4 04:58:04.788236 kubelet[2443]: E1104 04:58:04.788191 2443 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:58:04.813940 kubelet[2443]: I1104 04:58:04.813900 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:04.813940 kubelet[2443]: I1104 04:58:04.813942 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43f14609ba15a5c9c8ffa1c78821b9e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43f14609ba15a5c9c8ffa1c78821b9e3\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:04.814386 kubelet[2443]: I1104 04:58:04.813963 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:04.814386 kubelet[2443]: I1104 04:58:04.813981 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:04.814386 kubelet[2443]: I1104 04:58:04.814071 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:04.814386 kubelet[2443]: I1104 04:58:04.814116 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43f14609ba15a5c9c8ffa1c78821b9e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43f14609ba15a5c9c8ffa1c78821b9e3\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:04.814386 kubelet[2443]: I1104 04:58:04.814154 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43f14609ba15a5c9c8ffa1c78821b9e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"43f14609ba15a5c9c8ffa1c78821b9e3\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:04.814500 kubelet[2443]: I1104 04:58:04.814172 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:04.814500 kubelet[2443]: I1104 04:58:04.814191 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:04.814955 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 4 04:58:04.816940 kubelet[2443]: E1104 04:58:04.816895 2443 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:58:04.932235 kubelet[2443]: I1104 04:58:04.932196 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:58:04.932586 kubelet[2443]: E1104 04:58:04.932551 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 4 04:58:05.084072 kubelet[2443]: E1104 04:58:05.083875 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:05.085038 containerd[1611]: time="2025-11-04T04:58:05.084756823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43f14609ba15a5c9c8ffa1c78821b9e3,Namespace:kube-system,Attempt:0,}" Nov 4 04:58:05.100421 kubelet[2443]: E1104 04:58:05.100371 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:05.101049 containerd[1611]: time="2025-11-04T04:58:05.101009739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 4 04:58:05.118141 kubelet[2443]: E1104 04:58:05.118062 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:05.118802 containerd[1611]: time="2025-11-04T04:58:05.118754688Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 4 04:58:05.288023 kubelet[2443]: E1104 04:58:05.287962 2443 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 04:58:05.536229 update_engine[1591]: I20251104 04:58:05.536075 1591 update_attempter.cc:509] Updating boot flags... Nov 4 04:58:05.571169 kubelet[2443]: E1104 04:58:05.571093 2443 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 04:58:05.735525 kubelet[2443]: I1104 04:58:05.735414 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:58:05.735951 kubelet[2443]: E1104 04:58:05.735908 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Nov 4 04:58:05.835789 containerd[1611]: time="2025-11-04T04:58:05.835570063Z" level=info msg="connecting to shim 34df91ee33760f003a2edd1b354100cd34206ee094007bd96fd25a8ae238cbb6" address="unix:///run/containerd/s/fdd3d76e90b9b380b8fa177e66cde30c6fe470508d307e365ce885c9b834502e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:05.837702 containerd[1611]: time="2025-11-04T04:58:05.837656957Z" level=info msg="connecting to shim 2a489c2d40d1a987f8594b6ac7dc13d151db410d2efafcd1dce05f32263ecaa2" 
address="unix:///run/containerd/s/cd78a857238e15e2f9f6fb7e793644424a4cf2132f519f5af42b3683e142959e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:05.879892 containerd[1611]: time="2025-11-04T04:58:05.879828419Z" level=info msg="connecting to shim 01b138379cfb4867d2eebee49450648ebe2d1b86717ff6c9ff72c6451a73059c" address="unix:///run/containerd/s/4487eb4c03ebe35580f522a5c6cd2dcddc91f107913293bf0a158985f9ee11fe" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:05.895812 systemd[1]: Started cri-containerd-34df91ee33760f003a2edd1b354100cd34206ee094007bd96fd25a8ae238cbb6.scope - libcontainer container 34df91ee33760f003a2edd1b354100cd34206ee094007bd96fd25a8ae238cbb6. Nov 4 04:58:05.966216 systemd[1]: Started cri-containerd-01b138379cfb4867d2eebee49450648ebe2d1b86717ff6c9ff72c6451a73059c.scope - libcontainer container 01b138379cfb4867d2eebee49450648ebe2d1b86717ff6c9ff72c6451a73059c. Nov 4 04:58:05.983191 systemd[1]: Started cri-containerd-2a489c2d40d1a987f8594b6ac7dc13d151db410d2efafcd1dce05f32263ecaa2.scope - libcontainer container 2a489c2d40d1a987f8594b6ac7dc13d151db410d2efafcd1dce05f32263ecaa2. 
Nov 4 04:58:06.133641 containerd[1611]: time="2025-11-04T04:58:06.133020342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43f14609ba15a5c9c8ffa1c78821b9e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"34df91ee33760f003a2edd1b354100cd34206ee094007bd96fd25a8ae238cbb6\"" Nov 4 04:58:06.134371 kubelet[2443]: E1104 04:58:06.134334 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:06.141649 containerd[1611]: time="2025-11-04T04:58:06.141584640Z" level=info msg="CreateContainer within sandbox \"34df91ee33760f003a2edd1b354100cd34206ee094007bd96fd25a8ae238cbb6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 04:58:06.144197 containerd[1611]: time="2025-11-04T04:58:06.144159201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"01b138379cfb4867d2eebee49450648ebe2d1b86717ff6c9ff72c6451a73059c\"" Nov 4 04:58:06.147659 kubelet[2443]: E1104 04:58:06.145966 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:06.152514 containerd[1611]: time="2025-11-04T04:58:06.152460293Z" level=info msg="CreateContainer within sandbox \"01b138379cfb4867d2eebee49450648ebe2d1b86717ff6c9ff72c6451a73059c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 04:58:06.155218 containerd[1611]: time="2025-11-04T04:58:06.155154730Z" level=info msg="Container 6c50c85940da57c77c7d6a7f9a8097ee2a5d3240762d083ddcfe7aa19fdfae6f: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:06.159073 containerd[1611]: time="2025-11-04T04:58:06.159034791Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a489c2d40d1a987f8594b6ac7dc13d151db410d2efafcd1dce05f32263ecaa2\"" Nov 4 04:58:06.160031 kubelet[2443]: E1104 04:58:06.159981 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:06.164054 containerd[1611]: time="2025-11-04T04:58:06.163986472Z" level=info msg="Container a677f412bb7ebcdd1db4ab8e298d9fd7757fa3de18c5590488c303d858f3ee8d: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:06.164654 containerd[1611]: time="2025-11-04T04:58:06.164403027Z" level=info msg="CreateContainer within sandbox \"2a489c2d40d1a987f8594b6ac7dc13d151db410d2efafcd1dce05f32263ecaa2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 04:58:06.173760 containerd[1611]: time="2025-11-04T04:58:06.173708421Z" level=info msg="CreateContainer within sandbox \"34df91ee33760f003a2edd1b354100cd34206ee094007bd96fd25a8ae238cbb6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6c50c85940da57c77c7d6a7f9a8097ee2a5d3240762d083ddcfe7aa19fdfae6f\"" Nov 4 04:58:06.175665 containerd[1611]: time="2025-11-04T04:58:06.174365970Z" level=info msg="CreateContainer within sandbox \"01b138379cfb4867d2eebee49450648ebe2d1b86717ff6c9ff72c6451a73059c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a677f412bb7ebcdd1db4ab8e298d9fd7757fa3de18c5590488c303d858f3ee8d\"" Nov 4 04:58:06.175665 containerd[1611]: time="2025-11-04T04:58:06.174505003Z" level=info msg="StartContainer for \"6c50c85940da57c77c7d6a7f9a8097ee2a5d3240762d083ddcfe7aa19fdfae6f\"" Nov 4 04:58:06.176186 containerd[1611]: time="2025-11-04T04:58:06.176127019Z" level=info msg="StartContainer for \"a677f412bb7ebcdd1db4ab8e298d9fd7757fa3de18c5590488c303d858f3ee8d\"" Nov 4 
04:58:06.176944 containerd[1611]: time="2025-11-04T04:58:06.176911607Z" level=info msg="connecting to shim 6c50c85940da57c77c7d6a7f9a8097ee2a5d3240762d083ddcfe7aa19fdfae6f" address="unix:///run/containerd/s/fdd3d76e90b9b380b8fa177e66cde30c6fe470508d307e365ce885c9b834502e" protocol=ttrpc version=3 Nov 4 04:58:06.178839 containerd[1611]: time="2025-11-04T04:58:06.178803162Z" level=info msg="connecting to shim a677f412bb7ebcdd1db4ab8e298d9fd7757fa3de18c5590488c303d858f3ee8d" address="unix:///run/containerd/s/4487eb4c03ebe35580f522a5c6cd2dcddc91f107913293bf0a158985f9ee11fe" protocol=ttrpc version=3 Nov 4 04:58:06.182818 containerd[1611]: time="2025-11-04T04:58:06.182759717Z" level=info msg="Container 810d7095c5681df5391875b715c2c0a82d4867bb9231cc617f433d9d235248c1: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:06.194958 containerd[1611]: time="2025-11-04T04:58:06.194900133Z" level=info msg="CreateContainer within sandbox \"2a489c2d40d1a987f8594b6ac7dc13d151db410d2efafcd1dce05f32263ecaa2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"810d7095c5681df5391875b715c2c0a82d4867bb9231cc617f433d9d235248c1\"" Nov 4 04:58:06.196275 containerd[1611]: time="2025-11-04T04:58:06.196180055Z" level=info msg="StartContainer for \"810d7095c5681df5391875b715c2c0a82d4867bb9231cc617f433d9d235248c1\"" Nov 4 04:58:06.200315 containerd[1611]: time="2025-11-04T04:58:06.200250976Z" level=info msg="connecting to shim 810d7095c5681df5391875b715c2c0a82d4867bb9231cc617f433d9d235248c1" address="unix:///run/containerd/s/cd78a857238e15e2f9f6fb7e793644424a4cf2132f519f5af42b3683e142959e" protocol=ttrpc version=3 Nov 4 04:58:06.212811 systemd[1]: Started cri-containerd-6c50c85940da57c77c7d6a7f9a8097ee2a5d3240762d083ddcfe7aa19fdfae6f.scope - libcontainer container 6c50c85940da57c77c7d6a7f9a8097ee2a5d3240762d083ddcfe7aa19fdfae6f. 
Nov 4 04:58:06.213559 kubelet[2443]: E1104 04:58:06.213519 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="3.2s" Nov 4 04:58:06.217076 systemd[1]: Started cri-containerd-a677f412bb7ebcdd1db4ab8e298d9fd7757fa3de18c5590488c303d858f3ee8d.scope - libcontainer container a677f412bb7ebcdd1db4ab8e298d9fd7757fa3de18c5590488c303d858f3ee8d. Nov 4 04:58:06.230858 systemd[1]: Started cri-containerd-810d7095c5681df5391875b715c2c0a82d4867bb9231cc617f433d9d235248c1.scope - libcontainer container 810d7095c5681df5391875b715c2c0a82d4867bb9231cc617f433d9d235248c1. Nov 4 04:58:06.331884 containerd[1611]: time="2025-11-04T04:58:06.331822231Z" level=info msg="StartContainer for \"810d7095c5681df5391875b715c2c0a82d4867bb9231cc617f433d9d235248c1\" returns successfully" Nov 4 04:58:06.332100 containerd[1611]: time="2025-11-04T04:58:06.332072602Z" level=info msg="StartContainer for \"6c50c85940da57c77c7d6a7f9a8097ee2a5d3240762d083ddcfe7aa19fdfae6f\" returns successfully" Nov 4 04:58:06.343753 containerd[1611]: time="2025-11-04T04:58:06.343698209Z" level=info msg="StartContainer for \"a677f412bb7ebcdd1db4ab8e298d9fd7757fa3de18c5590488c303d858f3ee8d\" returns successfully" Nov 4 04:58:07.283113 kubelet[2443]: E1104 04:58:07.268347 2443 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:58:07.283113 kubelet[2443]: E1104 04:58:07.268529 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:07.283113 kubelet[2443]: E1104 04:58:07.273009 2443 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Nov 4 04:58:07.283113 kubelet[2443]: E1104 04:58:07.273114 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:07.283113 kubelet[2443]: E1104 04:58:07.276234 2443 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:58:07.283113 kubelet[2443]: E1104 04:58:07.276325 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:07.338366 kubelet[2443]: I1104 04:58:07.338313 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:58:08.172075 kubelet[2443]: I1104 04:58:08.172008 2443 apiserver.go:52] "Watching apiserver" Nov 4 04:58:08.205150 kubelet[2443]: I1104 04:58:08.205102 2443 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 04:58:08.237263 kubelet[2443]: I1104 04:58:08.236933 2443 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 04:58:08.237263 kubelet[2443]: E1104 04:58:08.237006 2443 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 04:58:08.278838 kubelet[2443]: I1104 04:58:08.278793 2443 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:08.279185 kubelet[2443]: I1104 04:58:08.279175 2443 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:08.279703 kubelet[2443]: I1104 04:58:08.279675 2443 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:08.289962 
kubelet[2443]: E1104 04:58:08.289918 2443 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:08.290998 kubelet[2443]: E1104 04:58:08.290845 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:08.291933 kubelet[2443]: E1104 04:58:08.291878 2443 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:08.292147 kubelet[2443]: E1104 04:58:08.292066 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:08.292346 kubelet[2443]: E1104 04:58:08.292303 2443 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:08.292539 kubelet[2443]: E1104 04:58:08.292510 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:08.307140 kubelet[2443]: I1104 04:58:08.306685 2443 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:08.308857 kubelet[2443]: E1104 04:58:08.308832 2443 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:08.308966 kubelet[2443]: I1104 04:58:08.308953 2443 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:08.311446 kubelet[2443]: E1104 04:58:08.311369 2443 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:08.311446 kubelet[2443]: I1104 04:58:08.311413 2443 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:08.313078 kubelet[2443]: E1104 04:58:08.313015 2443 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:09.280655 kubelet[2443]: I1104 04:58:09.280589 2443 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:09.281081 kubelet[2443]: I1104 04:58:09.281058 2443 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:09.286047 kubelet[2443]: E1104 04:58:09.285983 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:09.286207 kubelet[2443]: E1104 04:58:09.285983 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:10.222367 systemd[1]: Reload requested from client PID 2749 ('systemctl') (unit session-9.scope)... Nov 4 04:58:10.222387 systemd[1]: Reloading... 
Nov 4 04:58:10.282593 kubelet[2443]: E1104 04:58:10.282550 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:10.283191 kubelet[2443]: E1104 04:58:10.282792 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:10.315660 zram_generator::config[2796]: No configuration found. Nov 4 04:58:10.664988 systemd[1]: Reloading finished in 442 ms. Nov 4 04:58:10.703294 kubelet[2443]: I1104 04:58:10.703166 2443 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:58:10.703328 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:10.713338 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 04:58:10.713742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:10.713813 systemd[1]: kubelet.service: Consumed 1.508s CPU time, 134.1M memory peak. Nov 4 04:58:10.716227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:10.995322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:11.011208 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:58:11.068634 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:58:11.068634 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 4 04:58:11.068634 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:58:11.069084 kubelet[2838]: I1104 04:58:11.068705 2838 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:58:11.077580 kubelet[2838]: I1104 04:58:11.077526 2838 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 04:58:11.077580 kubelet[2838]: I1104 04:58:11.077558 2838 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:58:11.077809 kubelet[2838]: I1104 04:58:11.077791 2838 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 04:58:11.079102 kubelet[2838]: I1104 04:58:11.079072 2838 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 04:58:11.081446 kubelet[2838]: I1104 04:58:11.081405 2838 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:58:11.087289 kubelet[2838]: I1104 04:58:11.087257 2838 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:58:11.092882 kubelet[2838]: I1104 04:58:11.092825 2838 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 04:58:11.093177 kubelet[2838]: I1104 04:58:11.093132 2838 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:58:11.093390 kubelet[2838]: I1104 04:58:11.093165 2838 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:58:11.093528 kubelet[2838]: I1104 04:58:11.093399 2838 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:58:11.093528 
kubelet[2838]: I1104 04:58:11.093413 2838 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 04:58:11.093528 kubelet[2838]: I1104 04:58:11.093465 2838 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:11.093705 kubelet[2838]: I1104 04:58:11.093686 2838 kubelet.go:480] "Attempting to sync node with API server" Nov 4 04:58:11.093705 kubelet[2838]: I1104 04:58:11.093705 2838 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:58:11.093803 kubelet[2838]: I1104 04:58:11.093733 2838 kubelet.go:386] "Adding apiserver pod source" Nov 4 04:58:11.093803 kubelet[2838]: I1104 04:58:11.093757 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:58:11.096819 kubelet[2838]: I1104 04:58:11.096778 2838 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:58:11.097396 kubelet[2838]: I1104 04:58:11.097366 2838 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 04:58:11.101337 kubelet[2838]: I1104 04:58:11.101308 2838 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:58:11.101392 kubelet[2838]: I1104 04:58:11.101363 2838 server.go:1289] "Started kubelet" Nov 4 04:58:11.101686 kubelet[2838]: I1104 04:58:11.101656 2838 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:58:11.103685 kubelet[2838]: I1104 04:58:11.102715 2838 server.go:317] "Adding debug handlers to kubelet server" Nov 4 04:58:11.103685 kubelet[2838]: I1104 04:58:11.101673 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:58:11.103685 kubelet[2838]: I1104 04:58:11.103523 2838 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:58:11.103685 kubelet[2838]: I1104 
04:58:11.103658 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:58:11.108567 kubelet[2838]: I1104 04:58:11.108406 2838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:58:11.111727 kubelet[2838]: I1104 04:58:11.111704 2838 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:58:11.112190 kubelet[2838]: I1104 04:58:11.112176 2838 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 04:58:11.112362 kubelet[2838]: I1104 04:58:11.112350 2838 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:58:11.113487 kubelet[2838]: I1104 04:58:11.113459 2838 factory.go:223] Registration of the systemd container factory successfully Nov 4 04:58:11.113585 kubelet[2838]: I1104 04:58:11.113566 2838 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:58:11.114526 kubelet[2838]: I1104 04:58:11.114481 2838 factory.go:223] Registration of the containerd container factory successfully Nov 4 04:58:11.127258 kubelet[2838]: I1104 04:58:11.127197 2838 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 04:58:11.130576 kubelet[2838]: I1104 04:58:11.130552 2838 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 04:58:11.130726 kubelet[2838]: I1104 04:58:11.130584 2838 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 04:58:11.130726 kubelet[2838]: I1104 04:58:11.130627 2838 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 04:58:11.130726 kubelet[2838]: I1104 04:58:11.130640 2838 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 04:58:11.130726 kubelet[2838]: E1104 04:58:11.130693 2838 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:58:11.156823 kubelet[2838]: I1104 04:58:11.156768 2838 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:58:11.156823 kubelet[2838]: I1104 04:58:11.156796 2838 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:58:11.156823 kubelet[2838]: I1104 04:58:11.156819 2838 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:11.157033 kubelet[2838]: I1104 04:58:11.156982 2838 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 04:58:11.157033 kubelet[2838]: I1104 04:58:11.156999 2838 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 04:58:11.157033 kubelet[2838]: I1104 04:58:11.157018 2838 policy_none.go:49] "None policy: Start" Nov 4 04:58:11.157033 kubelet[2838]: I1104 04:58:11.157028 2838 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:58:11.157125 kubelet[2838]: I1104 04:58:11.157039 2838 state_mem.go:35] "Initializing new in-memory state store" Nov 4 04:58:11.157153 kubelet[2838]: I1104 04:58:11.157144 2838 state_mem.go:75] "Updated machine memory state" Nov 4 04:58:11.161429 kubelet[2838]: E1104 04:58:11.161393 2838 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 04:58:11.161627 kubelet[2838]: I1104 04:58:11.161587 2838 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:58:11.161683 kubelet[2838]: I1104 04:58:11.161604 2838 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:58:11.161821 kubelet[2838]: I1104 04:58:11.161797 2838 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 4 04:58:11.162528 kubelet[2838]: E1104 04:58:11.162499 2838 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 04:58:11.231591 kubelet[2838]: I1104 04:58:11.231534 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:11.231798 kubelet[2838]: I1104 04:58:11.231534 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:11.231798 kubelet[2838]: I1104 04:58:11.231730 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:11.269037 kubelet[2838]: I1104 04:58:11.268893 2838 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:58:11.313996 kubelet[2838]: I1104 04:58:11.313930 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43f14609ba15a5c9c8ffa1c78821b9e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43f14609ba15a5c9c8ffa1c78821b9e3\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:11.314153 kubelet[2838]: I1104 04:58:11.314014 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43f14609ba15a5c9c8ffa1c78821b9e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"43f14609ba15a5c9c8ffa1c78821b9e3\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:11.314153 kubelet[2838]: I1104 04:58:11.314073 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:11.314153 kubelet[2838]: I1104 04:58:11.314106 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:11.314153 kubelet[2838]: I1104 04:58:11.314136 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:11.314313 kubelet[2838]: I1104 04:58:11.314155 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43f14609ba15a5c9c8ffa1c78821b9e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43f14609ba15a5c9c8ffa1c78821b9e3\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:11.314313 kubelet[2838]: I1104 04:58:11.314212 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:11.314313 kubelet[2838]: I1104 04:58:11.314236 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:11.314313 kubelet[2838]: I1104 04:58:11.314268 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:58:11.650443 kubelet[2838]: E1104 04:58:11.650385 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:11.808702 kubelet[2838]: E1104 04:58:11.807161 2838 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 04:58:11.808702 kubelet[2838]: E1104 04:58:11.807699 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:11.808702 kubelet[2838]: E1104 04:58:11.808166 2838 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:11.808968 kubelet[2838]: E1104 04:58:11.808875 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:11.810686 kubelet[2838]: I1104 04:58:11.810601 2838 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 04:58:11.810857 kubelet[2838]: I1104 04:58:11.810744 2838 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 04:58:12.097059 kubelet[2838]: I1104 04:58:12.095520 2838 apiserver.go:52] "Watching apiserver" Nov 4 
04:58:12.113762 kubelet[2838]: I1104 04:58:12.113686 2838 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 04:58:12.142102 kubelet[2838]: I1104 04:58:12.142055 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:12.142326 kubelet[2838]: E1104 04:58:12.142188 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:12.142561 kubelet[2838]: E1104 04:58:12.142507 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:12.545326 kubelet[2838]: I1104 04:58:12.545112 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.545082984 podStartE2EDuration="3.545082984s" podCreationTimestamp="2025-11-04 04:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:12.544656421 +0000 UTC m=+1.528188589" watchObservedRunningTime="2025-11-04 04:58:12.545082984 +0000 UTC m=+1.528615131" Nov 4 04:58:12.547447 kubelet[2838]: E1104 04:58:12.547220 2838 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 04:58:12.547557 kubelet[2838]: E1104 04:58:12.547540 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:12.579494 kubelet[2838]: I1104 04:58:12.579308 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.5792861519999999 podStartE2EDuration="1.579286152s" podCreationTimestamp="2025-11-04 04:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:12.566088774 +0000 UTC m=+1.549620921" watchObservedRunningTime="2025-11-04 04:58:12.579286152 +0000 UTC m=+1.562818299" Nov 4 04:58:12.590524 kubelet[2838]: I1104 04:58:12.590277 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.590252003 podStartE2EDuration="3.590252003s" podCreationTimestamp="2025-11-04 04:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:12.580199621 +0000 UTC m=+1.563731768" watchObservedRunningTime="2025-11-04 04:58:12.590252003 +0000 UTC m=+1.573784150" Nov 4 04:58:13.146686 kubelet[2838]: E1104 04:58:13.146472 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:13.146686 kubelet[2838]: E1104 04:58:13.146500 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:14.145982 kubelet[2838]: E1104 04:58:14.145928 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:14.282254 kubelet[2838]: E1104 04:58:14.282198 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:16.129333 kubelet[2838]: E1104 04:58:16.129121 2838 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:16.152193 kubelet[2838]: E1104 04:58:16.152074 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:17.154520 kubelet[2838]: E1104 04:58:17.154230 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:17.315086 kubelet[2838]: I1104 04:58:17.315039 2838 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 04:58:17.315421 containerd[1611]: time="2025-11-04T04:58:17.315372005Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 04:58:17.315927 kubelet[2838]: I1104 04:58:17.315580 2838 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 04:58:18.151798 systemd[1]: Created slice kubepods-besteffort-podc1411ec8_2b1c_483c_8163_50bfed8c1fb9.slice - libcontainer container kubepods-besteffort-podc1411ec8_2b1c_483c_8163_50bfed8c1fb9.slice. 
Nov 4 04:58:18.158192 kubelet[2838]: I1104 04:58:18.158144 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1411ec8-2b1c-483c-8163-50bfed8c1fb9-lib-modules\") pod \"kube-proxy-rvmzn\" (UID: \"c1411ec8-2b1c-483c-8163-50bfed8c1fb9\") " pod="kube-system/kube-proxy-rvmzn" Nov 4 04:58:18.158192 kubelet[2838]: I1104 04:58:18.158185 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4bnf\" (UniqueName: \"kubernetes.io/projected/c1411ec8-2b1c-483c-8163-50bfed8c1fb9-kube-api-access-h4bnf\") pod \"kube-proxy-rvmzn\" (UID: \"c1411ec8-2b1c-483c-8163-50bfed8c1fb9\") " pod="kube-system/kube-proxy-rvmzn" Nov 4 04:58:18.158633 kubelet[2838]: I1104 04:58:18.158209 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1411ec8-2b1c-483c-8163-50bfed8c1fb9-kube-proxy\") pod \"kube-proxy-rvmzn\" (UID: \"c1411ec8-2b1c-483c-8163-50bfed8c1fb9\") " pod="kube-system/kube-proxy-rvmzn" Nov 4 04:58:18.158633 kubelet[2838]: I1104 04:58:18.158223 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1411ec8-2b1c-483c-8163-50bfed8c1fb9-xtables-lock\") pod \"kube-proxy-rvmzn\" (UID: \"c1411ec8-2b1c-483c-8163-50bfed8c1fb9\") " pod="kube-system/kube-proxy-rvmzn" Nov 4 04:58:18.459536 kubelet[2838]: E1104 04:58:18.459334 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:18.460446 containerd[1611]: time="2025-11-04T04:58:18.460386651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvmzn,Uid:c1411ec8-2b1c-483c-8163-50bfed8c1fb9,Namespace:kube-system,Attempt:0,}" Nov 4 
04:58:18.522464 containerd[1611]: time="2025-11-04T04:58:18.522399752Z" level=info msg="connecting to shim fdc3b241b4551cf3eecc4b7bd699e522d76822ca68800f1c686c4366170b362c" address="unix:///run/containerd/s/79cfca276cd5b335ce68dac907700d509cfe3bfa0fc61db6366e9456cfc0c29c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:18.560564 kubelet[2838]: I1104 04:58:18.560514 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78dc2348-4e85-4c73-aa1e-1dc3b51256c8-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r8tdt\" (UID: \"78dc2348-4e85-4c73-aa1e-1dc3b51256c8\") " pod="tigera-operator/tigera-operator-7dcd859c48-r8tdt" Nov 4 04:58:18.560564 kubelet[2838]: I1104 04:58:18.560558 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnbd\" (UniqueName: \"kubernetes.io/projected/78dc2348-4e85-4c73-aa1e-1dc3b51256c8-kube-api-access-mqnbd\") pod \"tigera-operator-7dcd859c48-r8tdt\" (UID: \"78dc2348-4e85-4c73-aa1e-1dc3b51256c8\") " pod="tigera-operator/tigera-operator-7dcd859c48-r8tdt" Nov 4 04:58:18.561380 systemd[1]: Created slice kubepods-besteffort-pod78dc2348_4e85_4c73_aa1e_1dc3b51256c8.slice - libcontainer container kubepods-besteffort-pod78dc2348_4e85_4c73_aa1e_1dc3b51256c8.slice. Nov 4 04:58:18.596919 systemd[1]: Started cri-containerd-fdc3b241b4551cf3eecc4b7bd699e522d76822ca68800f1c686c4366170b362c.scope - libcontainer container fdc3b241b4551cf3eecc4b7bd699e522d76822ca68800f1c686c4366170b362c. 
Nov 4 04:58:18.629403 containerd[1611]: time="2025-11-04T04:58:18.629346497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvmzn,Uid:c1411ec8-2b1c-483c-8163-50bfed8c1fb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdc3b241b4551cf3eecc4b7bd699e522d76822ca68800f1c686c4366170b362c\"" Nov 4 04:58:18.630508 kubelet[2838]: E1104 04:58:18.630473 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:18.636458 containerd[1611]: time="2025-11-04T04:58:18.636382880Z" level=info msg="CreateContainer within sandbox \"fdc3b241b4551cf3eecc4b7bd699e522d76822ca68800f1c686c4366170b362c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 04:58:18.652219 containerd[1611]: time="2025-11-04T04:58:18.652139772Z" level=info msg="Container 8413128fd4a3ad43a9d1504ca99143bc84e2a6c212cf32455bd0ad81b0cee3bc: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:18.657157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228356055.mount: Deactivated successfully. 
Nov 4 04:58:18.664742 containerd[1611]: time="2025-11-04T04:58:18.664667105Z" level=info msg="CreateContainer within sandbox \"fdc3b241b4551cf3eecc4b7bd699e522d76822ca68800f1c686c4366170b362c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8413128fd4a3ad43a9d1504ca99143bc84e2a6c212cf32455bd0ad81b0cee3bc\"" Nov 4 04:58:18.665583 containerd[1611]: time="2025-11-04T04:58:18.665551297Z" level=info msg="StartContainer for \"8413128fd4a3ad43a9d1504ca99143bc84e2a6c212cf32455bd0ad81b0cee3bc\"" Nov 4 04:58:18.667411 containerd[1611]: time="2025-11-04T04:58:18.667378621Z" level=info msg="connecting to shim 8413128fd4a3ad43a9d1504ca99143bc84e2a6c212cf32455bd0ad81b0cee3bc" address="unix:///run/containerd/s/79cfca276cd5b335ce68dac907700d509cfe3bfa0fc61db6366e9456cfc0c29c" protocol=ttrpc version=3 Nov 4 04:58:18.696990 systemd[1]: Started cri-containerd-8413128fd4a3ad43a9d1504ca99143bc84e2a6c212cf32455bd0ad81b0cee3bc.scope - libcontainer container 8413128fd4a3ad43a9d1504ca99143bc84e2a6c212cf32455bd0ad81b0cee3bc. 
Nov 4 04:58:18.752369 containerd[1611]: time="2025-11-04T04:58:18.752171483Z" level=info msg="StartContainer for \"8413128fd4a3ad43a9d1504ca99143bc84e2a6c212cf32455bd0ad81b0cee3bc\" returns successfully" Nov 4 04:58:18.868532 containerd[1611]: time="2025-11-04T04:58:18.868448562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r8tdt,Uid:78dc2348-4e85-4c73-aa1e-1dc3b51256c8,Namespace:tigera-operator,Attempt:0,}" Nov 4 04:58:18.894850 containerd[1611]: time="2025-11-04T04:58:18.894788893Z" level=info msg="connecting to shim 085e5de516cedb2652c40ae4ae4360031204054a587be26b92a893283fe95e2d" address="unix:///run/containerd/s/e0df2cad85163aaecf09d533d3819b6394d292f1c98b9d96a2ebf4956f43b3a6" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:18.921823 systemd[1]: Started cri-containerd-085e5de516cedb2652c40ae4ae4360031204054a587be26b92a893283fe95e2d.scope - libcontainer container 085e5de516cedb2652c40ae4ae4360031204054a587be26b92a893283fe95e2d. Nov 4 04:58:18.987466 containerd[1611]: time="2025-11-04T04:58:18.987417000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r8tdt,Uid:78dc2348-4e85-4c73-aa1e-1dc3b51256c8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"085e5de516cedb2652c40ae4ae4360031204054a587be26b92a893283fe95e2d\"" Nov 4 04:58:18.989437 containerd[1611]: time="2025-11-04T04:58:18.989161969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 04:58:19.161977 kubelet[2838]: E1104 04:58:19.161685 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:19.931024 kubelet[2838]: E1104 04:58:19.930933 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:20.167307 kubelet[2838]: E1104 
04:58:20.167267 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:20.369416 kubelet[2838]: I1104 04:58:20.369337 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rvmzn" podStartSLOduration=2.3693184990000002 podStartE2EDuration="2.369318499s" podCreationTimestamp="2025-11-04 04:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:19.180199648 +0000 UTC m=+8.163731795" watchObservedRunningTime="2025-11-04 04:58:20.369318499 +0000 UTC m=+9.352850646" Nov 4 04:58:22.076514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416597658.mount: Deactivated successfully. Nov 4 04:58:22.545838 containerd[1611]: time="2025-11-04T04:58:22.545765711Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:22.546763 containerd[1611]: time="2025-11-04T04:58:22.546726135Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558945" Nov 4 04:58:22.548090 containerd[1611]: time="2025-11-04T04:58:22.548050522Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:22.550219 containerd[1611]: time="2025-11-04T04:58:22.550174572Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:22.550724 containerd[1611]: time="2025-11-04T04:58:22.550673269Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.561457839s" Nov 4 04:58:22.550772 containerd[1611]: time="2025-11-04T04:58:22.550723303Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 04:58:22.555591 containerd[1611]: time="2025-11-04T04:58:22.555546161Z" level=info msg="CreateContainer within sandbox \"085e5de516cedb2652c40ae4ae4360031204054a587be26b92a893283fe95e2d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 04:58:22.567190 containerd[1611]: time="2025-11-04T04:58:22.567141652Z" level=info msg="Container 6ec281a0f8f1f9f0170a599f7218f8d55bdd4115d65d1587d6c4c44eb18d9aa7: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:22.574282 containerd[1611]: time="2025-11-04T04:58:22.574241177Z" level=info msg="CreateContainer within sandbox \"085e5de516cedb2652c40ae4ae4360031204054a587be26b92a893283fe95e2d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6ec281a0f8f1f9f0170a599f7218f8d55bdd4115d65d1587d6c4c44eb18d9aa7\"" Nov 4 04:58:22.574912 containerd[1611]: time="2025-11-04T04:58:22.574878915Z" level=info msg="StartContainer for \"6ec281a0f8f1f9f0170a599f7218f8d55bdd4115d65d1587d6c4c44eb18d9aa7\"" Nov 4 04:58:22.576175 containerd[1611]: time="2025-11-04T04:58:22.576046548Z" level=info msg="connecting to shim 6ec281a0f8f1f9f0170a599f7218f8d55bdd4115d65d1587d6c4c44eb18d9aa7" address="unix:///run/containerd/s/e0df2cad85163aaecf09d533d3819b6394d292f1c98b9d96a2ebf4956f43b3a6" protocol=ttrpc version=3 Nov 4 04:58:22.603882 systemd[1]: Started cri-containerd-6ec281a0f8f1f9f0170a599f7218f8d55bdd4115d65d1587d6c4c44eb18d9aa7.scope - libcontainer container 
6ec281a0f8f1f9f0170a599f7218f8d55bdd4115d65d1587d6c4c44eb18d9aa7. Nov 4 04:58:22.644459 containerd[1611]: time="2025-11-04T04:58:22.644395620Z" level=info msg="StartContainer for \"6ec281a0f8f1f9f0170a599f7218f8d55bdd4115d65d1587d6c4c44eb18d9aa7\" returns successfully" Nov 4 04:58:23.261240 kubelet[2838]: I1104 04:58:23.261118 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r8tdt" podStartSLOduration=1.698324741 podStartE2EDuration="5.261098282s" podCreationTimestamp="2025-11-04 04:58:18 +0000 UTC" firstStartedPulling="2025-11-04 04:58:18.988805159 +0000 UTC m=+7.972337316" lastFinishedPulling="2025-11-04 04:58:22.55157871 +0000 UTC m=+11.535110857" observedRunningTime="2025-11-04 04:58:23.260674477 +0000 UTC m=+12.244206624" watchObservedRunningTime="2025-11-04 04:58:23.261098282 +0000 UTC m=+12.244630429" Nov 4 04:58:24.289954 kubelet[2838]: E1104 04:58:24.289904 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:25.179880 kubelet[2838]: E1104 04:58:25.179832 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:28.419841 sudo[1842]: pam_unix(sudo:session): session closed for user root Nov 4 04:58:28.422203 sshd[1841]: Connection closed by 10.0.0.1 port 37706 Nov 4 04:58:28.423596 sshd-session[1838]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:28.428423 systemd[1]: sshd@8-10.0.0.56:22-10.0.0.1:37706.service: Deactivated successfully. Nov 4 04:58:28.432018 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 04:58:28.432348 systemd[1]: session-9.scope: Consumed 8.073s CPU time, 217M memory peak. Nov 4 04:58:28.436273 systemd-logind[1587]: Session 9 logged out. Waiting for processes to exit. 
Nov 4 04:58:28.437905 systemd-logind[1587]: Removed session 9. Nov 4 04:58:33.222593 systemd[1]: Created slice kubepods-besteffort-podfe97cd3b_8fcc_455f_83ff_13bf9241e233.slice - libcontainer container kubepods-besteffort-podfe97cd3b_8fcc_455f_83ff_13bf9241e233.slice. Nov 4 04:58:33.270753 kubelet[2838]: I1104 04:58:33.270668 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fe97cd3b-8fcc-455f-83ff-13bf9241e233-typha-certs\") pod \"calico-typha-7ffcdd44d-t5zlm\" (UID: \"fe97cd3b-8fcc-455f-83ff-13bf9241e233\") " pod="calico-system/calico-typha-7ffcdd44d-t5zlm" Nov 4 04:58:33.270753 kubelet[2838]: I1104 04:58:33.270757 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe97cd3b-8fcc-455f-83ff-13bf9241e233-tigera-ca-bundle\") pod \"calico-typha-7ffcdd44d-t5zlm\" (UID: \"fe97cd3b-8fcc-455f-83ff-13bf9241e233\") " pod="calico-system/calico-typha-7ffcdd44d-t5zlm" Nov 4 04:58:33.271551 kubelet[2838]: I1104 04:58:33.270785 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kx7\" (UniqueName: \"kubernetes.io/projected/fe97cd3b-8fcc-455f-83ff-13bf9241e233-kube-api-access-l7kx7\") pod \"calico-typha-7ffcdd44d-t5zlm\" (UID: \"fe97cd3b-8fcc-455f-83ff-13bf9241e233\") " pod="calico-system/calico-typha-7ffcdd44d-t5zlm" Nov 4 04:58:33.410010 systemd[1]: Created slice kubepods-besteffort-podd11f393a_88e7_4af5_9931_816ef392e21d.slice - libcontainer container kubepods-besteffort-podd11f393a_88e7_4af5_9931_816ef392e21d.slice. 
Nov 4 04:58:33.471843 kubelet[2838]: I1104 04:58:33.471764 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-cni-log-dir\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.471843 kubelet[2838]: I1104 04:58:33.471827 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d746n\" (UniqueName: \"kubernetes.io/projected/d11f393a-88e7-4af5-9931-816ef392e21d-kube-api-access-d746n\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472095 kubelet[2838]: I1104 04:58:33.471848 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-cni-bin-dir\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472095 kubelet[2838]: I1104 04:58:33.471896 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-var-lib-calico\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472095 kubelet[2838]: I1104 04:58:33.471911 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-var-run-calico\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472095 kubelet[2838]: I1104 04:58:33.472000 2838 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-xtables-lock\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472095 kubelet[2838]: I1104 04:58:33.472096 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-cni-net-dir\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472271 kubelet[2838]: I1104 04:58:33.472124 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-lib-modules\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472271 kubelet[2838]: I1104 04:58:33.472215 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f393a-88e7-4af5-9931-816ef392e21d-tigera-ca-bundle\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472410 kubelet[2838]: I1104 04:58:33.472350 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d11f393a-88e7-4af5-9931-816ef392e21d-node-certs\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472457 kubelet[2838]: I1104 04:58:33.472421 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-flexvol-driver-host\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.472488 kubelet[2838]: I1104 04:58:33.472460 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d11f393a-88e7-4af5-9931-816ef392e21d-policysync\") pod \"calico-node-sl8xh\" (UID: \"d11f393a-88e7-4af5-9931-816ef392e21d\") " pod="calico-system/calico-node-sl8xh" Nov 4 04:58:33.530060 kubelet[2838]: E1104 04:58:33.529150 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:33.530176 containerd[1611]: time="2025-11-04T04:58:33.530131417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ffcdd44d-t5zlm,Uid:fe97cd3b-8fcc-455f-83ff-13bf9241e233,Namespace:calico-system,Attempt:0,}" Nov 4 04:58:33.568232 containerd[1611]: time="2025-11-04T04:58:33.567914853Z" level=info msg="connecting to shim daf7ab48d21b442ac590fb8e4407919fedef32e4964142807b8da08672902dbc" address="unix:///run/containerd/s/18955518614bc5b82c9d765262545aa76000de128460eacbc2008a093c72853c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:33.582592 kubelet[2838]: E1104 04:58:33.581398 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.582592 kubelet[2838]: W1104 04:58:33.582175 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.582592 kubelet[2838]: E1104 04:58:33.582224 2838 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.587015 kubelet[2838]: E1104 04:58:33.586817 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.587848 kubelet[2838]: W1104 04:58:33.587485 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.587848 kubelet[2838]: E1104 04:58:33.587519 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.589906 kubelet[2838]: E1104 04:58:33.589809 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.589906 kubelet[2838]: W1104 04:58:33.589825 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.591047 kubelet[2838]: E1104 04:58:33.590305 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.591487 kubelet[2838]: E1104 04:58:33.591434 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.591487 kubelet[2838]: W1104 04:58:33.591448 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.591487 kubelet[2838]: E1104 04:58:33.591464 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.610570 kubelet[2838]: E1104 04:58:33.608566 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219" Nov 4 04:58:33.611880 kubelet[2838]: E1104 04:58:33.611838 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.612481 kubelet[2838]: W1104 04:58:33.612426 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.612650 kubelet[2838]: E1104 04:58:33.612556 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.637210 systemd[1]: Started cri-containerd-daf7ab48d21b442ac590fb8e4407919fedef32e4964142807b8da08672902dbc.scope - libcontainer container daf7ab48d21b442ac590fb8e4407919fedef32e4964142807b8da08672902dbc. 
Nov 4 04:58:33.659650 kubelet[2838]: E1104 04:58:33.659516 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.659650 kubelet[2838]: W1104 04:58:33.659547 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.659650 kubelet[2838]: E1104 04:58:33.659575 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.659927 kubelet[2838]: E1104 04:58:33.659862 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.659927 kubelet[2838]: W1104 04:58:33.659875 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.659927 kubelet[2838]: E1104 04:58:33.659888 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.661515 kubelet[2838]: E1104 04:58:33.660157 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.661515 kubelet[2838]: W1104 04:58:33.660177 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.661515 kubelet[2838]: E1104 04:58:33.660190 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.662517 kubelet[2838]: E1104 04:58:33.662484 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.662582 kubelet[2838]: W1104 04:58:33.662511 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.662582 kubelet[2838]: E1104 04:58:33.662559 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.662904 kubelet[2838]: E1104 04:58:33.662865 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.662904 kubelet[2838]: W1104 04:58:33.662886 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.662904 kubelet[2838]: E1104 04:58:33.662898 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.663166 kubelet[2838]: E1104 04:58:33.663135 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.663166 kubelet[2838]: W1104 04:58:33.663156 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.663166 kubelet[2838]: E1104 04:58:33.663168 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.663440 kubelet[2838]: E1104 04:58:33.663362 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.663440 kubelet[2838]: W1104 04:58:33.663374 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.663440 kubelet[2838]: E1104 04:58:33.663385 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.665079 kubelet[2838]: E1104 04:58:33.663596 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665079 kubelet[2838]: W1104 04:58:33.663630 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665079 kubelet[2838]: E1104 04:58:33.663641 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.665079 kubelet[2838]: E1104 04:58:33.663941 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665079 kubelet[2838]: W1104 04:58:33.663952 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665079 kubelet[2838]: E1104 04:58:33.663963 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.665079 kubelet[2838]: E1104 04:58:33.664180 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665079 kubelet[2838]: W1104 04:58:33.664191 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665079 kubelet[2838]: E1104 04:58:33.664202 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.665079 kubelet[2838]: E1104 04:58:33.664413 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665448 kubelet[2838]: W1104 04:58:33.664434 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665448 kubelet[2838]: E1104 04:58:33.664445 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.665448 kubelet[2838]: E1104 04:58:33.664690 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665448 kubelet[2838]: W1104 04:58:33.664701 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665448 kubelet[2838]: E1104 04:58:33.664750 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.665448 kubelet[2838]: E1104 04:58:33.665027 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665448 kubelet[2838]: W1104 04:58:33.665038 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665448 kubelet[2838]: E1104 04:58:33.665052 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.665448 kubelet[2838]: E1104 04:58:33.665310 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665448 kubelet[2838]: W1104 04:58:33.665322 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665980 kubelet[2838]: E1104 04:58:33.665333 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.665980 kubelet[2838]: E1104 04:58:33.665554 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665980 kubelet[2838]: W1104 04:58:33.665565 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665980 kubelet[2838]: E1104 04:58:33.665576 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.665980 kubelet[2838]: E1104 04:58:33.665836 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.665980 kubelet[2838]: W1104 04:58:33.665848 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.665980 kubelet[2838]: E1104 04:58:33.665859 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.666187 kubelet[2838]: E1104 04:58:33.666089 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.666187 kubelet[2838]: W1104 04:58:33.666101 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.666187 kubelet[2838]: E1104 04:58:33.666112 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.666789 kubelet[2838]: E1104 04:58:33.666326 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.666789 kubelet[2838]: W1104 04:58:33.666344 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.666789 kubelet[2838]: E1104 04:58:33.666356 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.666789 kubelet[2838]: E1104 04:58:33.666694 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.666789 kubelet[2838]: W1104 04:58:33.666706 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.666789 kubelet[2838]: E1104 04:58:33.666728 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.667124 kubelet[2838]: E1104 04:58:33.666949 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.667124 kubelet[2838]: W1104 04:58:33.666961 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.667124 kubelet[2838]: E1104 04:58:33.666972 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.675518 kubelet[2838]: E1104 04:58:33.675117 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.675518 kubelet[2838]: W1104 04:58:33.675149 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.675518 kubelet[2838]: E1104 04:58:33.675177 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.675769 kubelet[2838]: I1104 04:58:33.675670 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bskz\" (UniqueName: \"kubernetes.io/projected/d1afccb9-55ee-4f50-a636-3c55f302f219-kube-api-access-9bskz\") pod \"csi-node-driver-jnpcs\" (UID: \"d1afccb9-55ee-4f50-a636-3c55f302f219\") " pod="calico-system/csi-node-driver-jnpcs" Nov 4 04:58:33.676050 kubelet[2838]: E1104 04:58:33.676027 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.676050 kubelet[2838]: W1104 04:58:33.676046 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.676134 kubelet[2838]: E1104 04:58:33.676059 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.676860 kubelet[2838]: I1104 04:58:33.676603 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d1afccb9-55ee-4f50-a636-3c55f302f219-socket-dir\") pod \"csi-node-driver-jnpcs\" (UID: \"d1afccb9-55ee-4f50-a636-3c55f302f219\") " pod="calico-system/csi-node-driver-jnpcs" Nov 4 04:58:33.677119 kubelet[2838]: E1104 04:58:33.677095 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.677168 kubelet[2838]: W1104 04:58:33.677118 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.677168 kubelet[2838]: E1104 04:58:33.677134 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.677478 kubelet[2838]: E1104 04:58:33.677458 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.677478 kubelet[2838]: W1104 04:58:33.677474 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.677534 kubelet[2838]: E1104 04:58:33.677486 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.677919 kubelet[2838]: E1104 04:58:33.677898 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.677919 kubelet[2838]: W1104 04:58:33.677914 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.677998 kubelet[2838]: E1104 04:58:33.677928 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.678276 kubelet[2838]: E1104 04:58:33.678256 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.678276 kubelet[2838]: W1104 04:58:33.678272 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.678336 kubelet[2838]: E1104 04:58:33.678285 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.678651 kubelet[2838]: E1104 04:58:33.678602 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.678693 kubelet[2838]: W1104 04:58:33.678664 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.678693 kubelet[2838]: E1104 04:58:33.678679 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.678888 kubelet[2838]: I1104 04:58:33.678862 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1afccb9-55ee-4f50-a636-3c55f302f219-kubelet-dir\") pod \"csi-node-driver-jnpcs\" (UID: \"d1afccb9-55ee-4f50-a636-3c55f302f219\") " pod="calico-system/csi-node-driver-jnpcs" Nov 4 04:58:33.679166 kubelet[2838]: E1104 04:58:33.679145 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.679166 kubelet[2838]: W1104 04:58:33.679162 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.679220 kubelet[2838]: E1104 04:58:33.679177 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.679518 kubelet[2838]: E1104 04:58:33.679491 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.679518 kubelet[2838]: W1104 04:58:33.679508 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.679602 kubelet[2838]: E1104 04:58:33.679522 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.679865 kubelet[2838]: E1104 04:58:33.679845 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.679865 kubelet[2838]: W1104 04:58:33.679861 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.679933 kubelet[2838]: E1104 04:58:33.679873 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.679933 kubelet[2838]: I1104 04:58:33.679912 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d1afccb9-55ee-4f50-a636-3c55f302f219-registration-dir\") pod \"csi-node-driver-jnpcs\" (UID: \"d1afccb9-55ee-4f50-a636-3c55f302f219\") " pod="calico-system/csi-node-driver-jnpcs" Nov 4 04:58:33.680239 kubelet[2838]: E1104 04:58:33.680216 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.680239 kubelet[2838]: W1104 04:58:33.680233 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.680309 kubelet[2838]: E1104 04:58:33.680246 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.680309 kubelet[2838]: I1104 04:58:33.680274 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d1afccb9-55ee-4f50-a636-3c55f302f219-varrun\") pod \"csi-node-driver-jnpcs\" (UID: \"d1afccb9-55ee-4f50-a636-3c55f302f219\") " pod="calico-system/csi-node-driver-jnpcs" Nov 4 04:58:33.680569 kubelet[2838]: E1104 04:58:33.680541 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.680569 kubelet[2838]: W1104 04:58:33.680560 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.680680 kubelet[2838]: E1104 04:58:33.680576 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.680811 kubelet[2838]: E1104 04:58:33.680792 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.680811 kubelet[2838]: W1104 04:58:33.680804 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.680871 kubelet[2838]: E1104 04:58:33.680815 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.681051 kubelet[2838]: E1104 04:58:33.681032 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.681051 kubelet[2838]: W1104 04:58:33.681045 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.681127 kubelet[2838]: E1104 04:58:33.681055 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.681237 kubelet[2838]: E1104 04:58:33.681220 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.681237 kubelet[2838]: W1104 04:58:33.681232 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.681312 kubelet[2838]: E1104 04:58:33.681241 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.710153 containerd[1611]: time="2025-11-04T04:58:33.710095995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ffcdd44d-t5zlm,Uid:fe97cd3b-8fcc-455f-83ff-13bf9241e233,Namespace:calico-system,Attempt:0,} returns sandbox id \"daf7ab48d21b442ac590fb8e4407919fedef32e4964142807b8da08672902dbc\"" Nov 4 04:58:33.711116 kubelet[2838]: E1104 04:58:33.711081 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:33.711996 containerd[1611]: time="2025-11-04T04:58:33.711780496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 04:58:33.714953 kubelet[2838]: E1104 04:58:33.714882 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:33.715518 containerd[1611]: time="2025-11-04T04:58:33.715471523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sl8xh,Uid:d11f393a-88e7-4af5-9931-816ef392e21d,Namespace:calico-system,Attempt:0,}" Nov 4 04:58:33.741106 containerd[1611]: time="2025-11-04T04:58:33.740702282Z" level=info msg="connecting to shim 12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271" address="unix:///run/containerd/s/fdd5af401cb4c0b33006e5d6490a1b6118118dceb18dae941feb738b2826e6a5" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:33.782062 kubelet[2838]: E1104 04:58:33.781300 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.782062 kubelet[2838]: W1104 04:58:33.781331 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 
04:58:33.782062 kubelet[2838]: E1104 04:58:33.781355 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.782062 kubelet[2838]: E1104 04:58:33.781764 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.782062 kubelet[2838]: W1104 04:58:33.781778 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.782062 kubelet[2838]: E1104 04:58:33.781792 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.782468 kubelet[2838]: E1104 04:58:33.782137 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.782468 kubelet[2838]: W1104 04:58:33.782151 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.782468 kubelet[2838]: E1104 04:58:33.782165 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.782468 kubelet[2838]: E1104 04:58:33.782418 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.782468 kubelet[2838]: W1104 04:58:33.782450 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.782468 kubelet[2838]: E1104 04:58:33.782462 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.782807 kubelet[2838]: E1104 04:58:33.782776 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.782807 kubelet[2838]: W1104 04:58:33.782791 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.782807 kubelet[2838]: E1104 04:58:33.782803 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:33.783123 kubelet[2838]: E1104 04:58:33.783102 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.783123 kubelet[2838]: W1104 04:58:33.783117 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.783269 kubelet[2838]: E1104 04:58:33.783130 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.783653 kubelet[2838]: E1104 04:58:33.783444 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:33.783653 kubelet[2838]: W1104 04:58:33.783461 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:33.783653 kubelet[2838]: E1104 04:58:33.783475 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:33.784142 systemd[1]: Started cri-containerd-12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271.scope - libcontainer container 12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271. 
Nov 4 04:58:33.784604 kubelet[2838]: E1104 04:58:33.784584 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.784604 kubelet[2838]: W1104 04:58:33.784600 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.784760 kubelet[2838]: E1104 04:58:33.784651 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.785241 kubelet[2838]: E1104 04:58:33.785222 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.785241 kubelet[2838]: W1104 04:58:33.785239 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.785334 kubelet[2838]: E1104 04:58:33.785252 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.786099 kubelet[2838]: E1104 04:58:33.786082 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.786099 kubelet[2838]: W1104 04:58:33.786094 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.786238 kubelet[2838]: E1104 04:58:33.786104 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.786375 kubelet[2838]: E1104 04:58:33.786357 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.786375 kubelet[2838]: W1104 04:58:33.786373 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.786480 kubelet[2838]: E1104 04:58:33.786385 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.787018 kubelet[2838]: E1104 04:58:33.786998 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.787018 kubelet[2838]: W1104 04:58:33.787014 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.787120 kubelet[2838]: E1104 04:58:33.787028 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.787315 kubelet[2838]: E1104 04:58:33.787301 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.787315 kubelet[2838]: W1104 04:58:33.787315 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.787387 kubelet[2838]: E1104 04:58:33.787326 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.787682 kubelet[2838]: E1104 04:58:33.787666 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.787682 kubelet[2838]: W1104 04:58:33.787681 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.787788 kubelet[2838]: E1104 04:58:33.787692 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.788062 kubelet[2838]: E1104 04:58:33.788043 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.788062 kubelet[2838]: W1104 04:58:33.788058 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.788062 kubelet[2838]: E1104 04:58:33.788071 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.788379 kubelet[2838]: E1104 04:58:33.788360 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.788379 kubelet[2838]: W1104 04:58:33.788374 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.788540 kubelet[2838]: E1104 04:58:33.788386 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.788720 kubelet[2838]: E1104 04:58:33.788682 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.788720 kubelet[2838]: W1104 04:58:33.788696 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.788720 kubelet[2838]: E1104 04:58:33.788719 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.789001 kubelet[2838]: E1104 04:58:33.788977 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.789001 kubelet[2838]: W1104 04:58:33.788991 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.789001 kubelet[2838]: E1104 04:58:33.789002 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.789348 kubelet[2838]: E1104 04:58:33.789331 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.789348 kubelet[2838]: W1104 04:58:33.789346 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.789449 kubelet[2838]: E1104 04:58:33.789358 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.789829 kubelet[2838]: E1104 04:58:33.789797 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.789829 kubelet[2838]: W1104 04:58:33.789813 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.789829 kubelet[2838]: E1104 04:58:33.789825 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.791529 kubelet[2838]: E1104 04:58:33.791509 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.791529 kubelet[2838]: W1104 04:58:33.791525 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.791656 kubelet[2838]: E1104 04:58:33.791537 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.791831 kubelet[2838]: E1104 04:58:33.791815 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.791831 kubelet[2838]: W1104 04:58:33.791832 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.791899 kubelet[2838]: E1104 04:58:33.791844 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.792148 kubelet[2838]: E1104 04:58:33.792128 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.792184 kubelet[2838]: W1104 04:58:33.792147 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.792184 kubelet[2838]: E1104 04:58:33.792160 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.792397 kubelet[2838]: E1104 04:58:33.792379 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.792397 kubelet[2838]: W1104 04:58:33.792395 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.792463 kubelet[2838]: E1104 04:58:33.792408 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.793439 kubelet[2838]: E1104 04:58:33.793008 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.793439 kubelet[2838]: W1104 04:58:33.793023 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.793439 kubelet[2838]: E1104 04:58:33.793035 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.806370 kubelet[2838]: E1104 04:58:33.806327 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:33.806370 kubelet[2838]: W1104 04:58:33.806358 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:33.806567 kubelet[2838]: E1104 04:58:33.806411 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:33.825144 containerd[1611]: time="2025-11-04T04:58:33.825081571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sl8xh,Uid:d11f393a-88e7-4af5-9931-816ef392e21d,Namespace:calico-system,Attempt:0,} returns sandbox id \"12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271\""
Nov 4 04:58:33.825954 kubelet[2838]: E1104 04:58:33.825915 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:35.132059 kubelet[2838]: E1104 04:58:35.131979 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:58:35.744871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817045641.mount: Deactivated successfully.
Nov 4 04:58:36.779442 containerd[1611]: time="2025-11-04T04:58:36.779336351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:36.782983 containerd[1611]: time="2025-11-04T04:58:36.782931667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893"
Nov 4 04:58:36.785754 containerd[1611]: time="2025-11-04T04:58:36.785708217Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:36.793360 containerd[1611]: time="2025-11-04T04:58:36.793288219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:36.794635 containerd[1611]: time="2025-11-04T04:58:36.794048666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.082232293s"
Nov 4 04:58:36.794635 containerd[1611]: time="2025-11-04T04:58:36.794097509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 4 04:58:36.796156 containerd[1611]: time="2025-11-04T04:58:36.796080649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 4 04:58:36.818068 containerd[1611]: time="2025-11-04T04:58:36.818005087Z" level=info msg="CreateContainer within sandbox \"daf7ab48d21b442ac590fb8e4407919fedef32e4964142807b8da08672902dbc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 4 04:58:36.839677 containerd[1611]: time="2025-11-04T04:58:36.838412388Z" level=info msg="Container 1d8d251d70b40cfa85ae5a1a4a8896b5d703761794afacd938b0103f8f357d8f: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:58:36.852647 containerd[1611]: time="2025-11-04T04:58:36.852568339Z" level=info msg="CreateContainer within sandbox \"daf7ab48d21b442ac590fb8e4407919fedef32e4964142807b8da08672902dbc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d8d251d70b40cfa85ae5a1a4a8896b5d703761794afacd938b0103f8f357d8f\""
Nov 4 04:58:36.853545 containerd[1611]: time="2025-11-04T04:58:36.853499707Z" level=info msg="StartContainer for \"1d8d251d70b40cfa85ae5a1a4a8896b5d703761794afacd938b0103f8f357d8f\""
Nov 4 04:58:36.855170 containerd[1611]: time="2025-11-04T04:58:36.855136639Z" level=info msg="connecting to shim 1d8d251d70b40cfa85ae5a1a4a8896b5d703761794afacd938b0103f8f357d8f" address="unix:///run/containerd/s/18955518614bc5b82c9d765262545aa76000de128460eacbc2008a093c72853c" protocol=ttrpc version=3
Nov 4 04:58:36.898039 systemd[1]: Started cri-containerd-1d8d251d70b40cfa85ae5a1a4a8896b5d703761794afacd938b0103f8f357d8f.scope - libcontainer container 1d8d251d70b40cfa85ae5a1a4a8896b5d703761794afacd938b0103f8f357d8f.
Nov 4 04:58:36.984580 containerd[1611]: time="2025-11-04T04:58:36.984432833Z" level=info msg="StartContainer for \"1d8d251d70b40cfa85ae5a1a4a8896b5d703761794afacd938b0103f8f357d8f\" returns successfully"
Nov 4 04:58:37.131656 kubelet[2838]: E1104 04:58:37.131519 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:58:37.248317 kubelet[2838]: E1104 04:58:37.248258 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:37.292059 kubelet[2838]: E1104 04:58:37.292009 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.292861 kubelet[2838]: W1104 04:58:37.292824 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.292962 kubelet[2838]: E1104 04:58:37.292866 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.293209 kubelet[2838]: E1104 04:58:37.293188 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.293292 kubelet[2838]: W1104 04:58:37.293227 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.293292 kubelet[2838]: E1104 04:58:37.293240 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.293501 kubelet[2838]: E1104 04:58:37.293480 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.293681 kubelet[2838]: W1104 04:58:37.293495 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.293681 kubelet[2838]: E1104 04:58:37.293525 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.294632 kubelet[2838]: E1104 04:58:37.293865 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.294632 kubelet[2838]: W1104 04:58:37.293881 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.294632 kubelet[2838]: E1104 04:58:37.293894 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.296045 kubelet[2838]: E1104 04:58:37.296023 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.296045 kubelet[2838]: W1104 04:58:37.296040 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.296150 kubelet[2838]: E1104 04:58:37.296058 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.296380 kubelet[2838]: E1104 04:58:37.296356 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.296380 kubelet[2838]: W1104 04:58:37.296374 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.296489 kubelet[2838]: E1104 04:58:37.296387 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.296712 kubelet[2838]: E1104 04:58:37.296687 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.296712 kubelet[2838]: W1104 04:58:37.296704 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.296810 kubelet[2838]: E1104 04:58:37.296717 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.297094 kubelet[2838]: E1104 04:58:37.297069 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.297094 kubelet[2838]: W1104 04:58:37.297086 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.297173 kubelet[2838]: E1104 04:58:37.297098 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.297453 kubelet[2838]: E1104 04:58:37.297425 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.297453 kubelet[2838]: W1104 04:58:37.297449 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.297530 kubelet[2838]: E1104 04:58:37.297463 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.297800 kubelet[2838]: E1104 04:58:37.297777 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.297859 kubelet[2838]: W1104 04:58:37.297794 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.297859 kubelet[2838]: E1104 04:58:37.297830 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.299214 kubelet[2838]: E1104 04:58:37.299189 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.299214 kubelet[2838]: W1104 04:58:37.299209 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.299319 kubelet[2838]: E1104 04:58:37.299223 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.300934 kubelet[2838]: E1104 04:58:37.300721 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.300934 kubelet[2838]: W1104 04:58:37.300740 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.300934 kubelet[2838]: E1104 04:58:37.300753 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.301300 kubelet[2838]: E1104 04:58:37.301267 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.301300 kubelet[2838]: W1104 04:58:37.301286 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.301300 kubelet[2838]: E1104 04:58:37.301301 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.302230 kubelet[2838]: E1104 04:58:37.302102 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.302230 kubelet[2838]: W1104 04:58:37.302117 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.302230 kubelet[2838]: E1104 04:58:37.302130 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.302571 kubelet[2838]: E1104 04:58:37.302513 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.302571 kubelet[2838]: W1104 04:58:37.302527 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.302571 kubelet[2838]: E1104 04:58:37.302540 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.319195 kubelet[2838]: E1104 04:58:37.319141 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.319195 kubelet[2838]: W1104 04:58:37.319182 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.319437 kubelet[2838]: E1104 04:58:37.319216 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.320110 kubelet[2838]: E1104 04:58:37.319899 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.320110 kubelet[2838]: W1104 04:58:37.320107 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.320194 kubelet[2838]: E1104 04:58:37.320124 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.321628 kubelet[2838]: E1104 04:58:37.320779 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.321725 kubelet[2838]: W1104 04:58:37.321605 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.321725 kubelet[2838]: E1104 04:58:37.321654 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.322059 kubelet[2838]: E1104 04:58:37.322033 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.322059 kubelet[2838]: W1104 04:58:37.322053 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.322163 kubelet[2838]: E1104 04:58:37.322066 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.322904 kubelet[2838]: E1104 04:58:37.322879 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.322904 kubelet[2838]: W1104 04:58:37.322899 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.323004 kubelet[2838]: E1104 04:58:37.322913 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.323213 kubelet[2838]: E1104 04:58:37.323188 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.323213 kubelet[2838]: W1104 04:58:37.323206 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.323310 kubelet[2838]: E1104 04:58:37.323222 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.323575 kubelet[2838]: E1104 04:58:37.323552 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.323575 kubelet[2838]: W1104 04:58:37.323570 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.323709 kubelet[2838]: E1104 04:58:37.323583 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.324084 kubelet[2838]: E1104 04:58:37.324004 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.324084 kubelet[2838]: W1104 04:58:37.324024 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.324084 kubelet[2838]: E1104 04:58:37.324037 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.324400 kubelet[2838]: E1104 04:58:37.324307 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.324400 kubelet[2838]: W1104 04:58:37.324327 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.324400 kubelet[2838]: E1104 04:58:37.324340 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:37.324692 kubelet[2838]: E1104 04:58:37.324670 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:37.324692 kubelet[2838]: W1104 04:58:37.324687 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:37.324692 kubelet[2838]: E1104 04:58:37.324700 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 4 04:58:37.325087 kubelet[2838]: E1104 04:58:37.324953 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.325087 kubelet[2838]: W1104 04:58:37.324965 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.325087 kubelet[2838]: E1104 04:58:37.324976 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:37.325396 kubelet[2838]: E1104 04:58:37.325374 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.325396 kubelet[2838]: W1104 04:58:37.325393 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.325768 kubelet[2838]: E1104 04:58:37.325409 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:37.326637 kubelet[2838]: E1104 04:58:37.326155 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.326637 kubelet[2838]: W1104 04:58:37.326176 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.326637 kubelet[2838]: E1104 04:58:37.326189 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:37.327089 kubelet[2838]: E1104 04:58:37.327061 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.327089 kubelet[2838]: W1104 04:58:37.327083 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.327165 kubelet[2838]: E1104 04:58:37.327098 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:37.329435 kubelet[2838]: E1104 04:58:37.329392 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.329435 kubelet[2838]: W1104 04:58:37.329420 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.329574 kubelet[2838]: E1104 04:58:37.329441 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:37.330123 kubelet[2838]: E1104 04:58:37.330089 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.330123 kubelet[2838]: W1104 04:58:37.330109 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.330123 kubelet[2838]: E1104 04:58:37.330121 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:37.330503 kubelet[2838]: E1104 04:58:37.330482 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.330503 kubelet[2838]: W1104 04:58:37.330498 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.330752 kubelet[2838]: E1104 04:58:37.330510 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:37.331013 kubelet[2838]: E1104 04:58:37.330983 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:37.331013 kubelet[2838]: W1104 04:58:37.331005 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:37.331092 kubelet[2838]: E1104 04:58:37.331018 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.250067 kubelet[2838]: I1104 04:58:38.250012 2838 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:58:38.250599 kubelet[2838]: E1104 04:58:38.250393 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:38.311072 kubelet[2838]: E1104 04:58:38.311022 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.311072 kubelet[2838]: W1104 04:58:38.311054 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.311072 kubelet[2838]: E1104 04:58:38.311082 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.311349 kubelet[2838]: E1104 04:58:38.311322 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.311349 kubelet[2838]: W1104 04:58:38.311332 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.311349 kubelet[2838]: E1104 04:58:38.311342 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.311796 kubelet[2838]: E1104 04:58:38.311748 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.311796 kubelet[2838]: W1104 04:58:38.311782 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.312036 kubelet[2838]: E1104 04:58:38.311818 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.312212 kubelet[2838]: E1104 04:58:38.312177 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.312212 kubelet[2838]: W1104 04:58:38.312193 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.312212 kubelet[2838]: E1104 04:58:38.312205 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.312451 kubelet[2838]: E1104 04:58:38.312437 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.312451 kubelet[2838]: W1104 04:58:38.312446 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.312518 kubelet[2838]: E1104 04:58:38.312459 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.312704 kubelet[2838]: E1104 04:58:38.312686 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.312704 kubelet[2838]: W1104 04:58:38.312699 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.312797 kubelet[2838]: E1104 04:58:38.312710 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.312914 kubelet[2838]: E1104 04:58:38.312895 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.312914 kubelet[2838]: W1104 04:58:38.312908 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.312983 kubelet[2838]: E1104 04:58:38.312917 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.313118 kubelet[2838]: E1104 04:58:38.313100 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.313118 kubelet[2838]: W1104 04:58:38.313113 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.313189 kubelet[2838]: E1104 04:58:38.313123 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.313331 kubelet[2838]: E1104 04:58:38.313313 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.313331 kubelet[2838]: W1104 04:58:38.313325 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.313398 kubelet[2838]: E1104 04:58:38.313336 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.313547 kubelet[2838]: E1104 04:58:38.313528 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.313547 kubelet[2838]: W1104 04:58:38.313541 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.313638 kubelet[2838]: E1104 04:58:38.313553 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.313787 kubelet[2838]: E1104 04:58:38.313768 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.313787 kubelet[2838]: W1104 04:58:38.313781 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.313854 kubelet[2838]: E1104 04:58:38.313791 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.313994 kubelet[2838]: E1104 04:58:38.313977 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.313994 kubelet[2838]: W1104 04:58:38.313990 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.314060 kubelet[2838]: E1104 04:58:38.314000 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.314213 kubelet[2838]: E1104 04:58:38.314195 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.314213 kubelet[2838]: W1104 04:58:38.314208 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.314276 kubelet[2838]: E1104 04:58:38.314219 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.314426 kubelet[2838]: E1104 04:58:38.314408 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.314426 kubelet[2838]: W1104 04:58:38.314421 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.314491 kubelet[2838]: E1104 04:58:38.314431 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.314674 kubelet[2838]: E1104 04:58:38.314655 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.314674 kubelet[2838]: W1104 04:58:38.314669 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.314746 kubelet[2838]: E1104 04:58:38.314679 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.329468 kubelet[2838]: E1104 04:58:38.329411 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.329468 kubelet[2838]: W1104 04:58:38.329435 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.329468 kubelet[2838]: E1104 04:58:38.329458 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.329789 kubelet[2838]: E1104 04:58:38.329756 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.329789 kubelet[2838]: W1104 04:58:38.329765 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.329789 kubelet[2838]: E1104 04:58:38.329775 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.330155 kubelet[2838]: E1104 04:58:38.330096 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.330155 kubelet[2838]: W1104 04:58:38.330144 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.330228 kubelet[2838]: E1104 04:58:38.330184 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.330494 kubelet[2838]: E1104 04:58:38.330462 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.330494 kubelet[2838]: W1104 04:58:38.330480 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.330494 kubelet[2838]: E1104 04:58:38.330491 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.330745 kubelet[2838]: E1104 04:58:38.330727 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.330745 kubelet[2838]: W1104 04:58:38.330739 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.330745 kubelet[2838]: E1104 04:58:38.330747 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.331010 kubelet[2838]: E1104 04:58:38.330983 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.331010 kubelet[2838]: W1104 04:58:38.330996 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.331010 kubelet[2838]: E1104 04:58:38.331004 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.331254 kubelet[2838]: E1104 04:58:38.331235 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.331254 kubelet[2838]: W1104 04:58:38.331246 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.331254 kubelet[2838]: E1104 04:58:38.331256 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.331497 kubelet[2838]: E1104 04:58:38.331462 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.331497 kubelet[2838]: W1104 04:58:38.331474 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.331497 kubelet[2838]: E1104 04:58:38.331483 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.331831 kubelet[2838]: E1104 04:58:38.331793 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.331882 kubelet[2838]: W1104 04:58:38.331830 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.331882 kubelet[2838]: E1104 04:58:38.331865 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.332180 kubelet[2838]: E1104 04:58:38.332149 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.332180 kubelet[2838]: W1104 04:58:38.332164 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.332180 kubelet[2838]: E1104 04:58:38.332176 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.332405 kubelet[2838]: E1104 04:58:38.332385 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.332405 kubelet[2838]: W1104 04:58:38.332399 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.332476 kubelet[2838]: E1104 04:58:38.332410 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.332729 kubelet[2838]: E1104 04:58:38.332708 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.332729 kubelet[2838]: W1104 04:58:38.332722 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.332805 kubelet[2838]: E1104 04:58:38.332734 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.333175 kubelet[2838]: E1104 04:58:38.333137 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.333175 kubelet[2838]: W1104 04:58:38.333162 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.333254 kubelet[2838]: E1104 04:58:38.333185 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:38.333422 kubelet[2838]: E1104 04:58:38.333397 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.333422 kubelet[2838]: W1104 04:58:38.333409 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.333422 kubelet[2838]: E1104 04:58:38.333418 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:38.333859 kubelet[2838]: E1104 04:58:38.333641 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:38.333859 kubelet[2838]: W1104 04:58:38.333663 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:38.333859 kubelet[2838]: E1104 04:58:38.333672 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 4 04:58:38.333987 kubelet[2838]: E1104 04:58:38.333901 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:38.333987 kubelet[2838]: W1104 04:58:38.333912 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:38.333987 kubelet[2838]: E1104 04:58:38.333924 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:38.334234 kubelet[2838]: E1104 04:58:38.334202 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:38.334234 kubelet[2838]: W1104 04:58:38.334218 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:38.334234 kubelet[2838]: E1104 04:58:38.334229 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:38.334469 kubelet[2838]: E1104 04:58:38.334450 2838 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:38.334469 kubelet[2838]: W1104 04:58:38.334463 2838 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:38.334539 kubelet[2838]: E1104 04:58:38.334472 2838 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:38.461781 containerd[1611]: time="2025-11-04T04:58:38.461695850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:38.466263 containerd[1611]: time="2025-11-04T04:58:38.466201855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0"
Nov 4 04:58:38.471654 containerd[1611]: time="2025-11-04T04:58:38.470860094Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:38.476353 containerd[1611]: time="2025-11-04T04:58:38.476268552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:38.478654 containerd[1611]: time="2025-11-04T04:58:38.477114539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.680960252s"
Nov 4 04:58:38.478654 containerd[1611]: time="2025-11-04T04:58:38.477153903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 4 04:58:38.484040 containerd[1611]: time="2025-11-04T04:58:38.483973317Z" level=info msg="CreateContainer within sandbox \"12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 4 04:58:38.500512 containerd[1611]: time="2025-11-04T04:58:38.500292064Z" level=info msg="Container fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:58:38.517908 containerd[1611]: time="2025-11-04T04:58:38.517813558Z" level=info msg="CreateContainer within sandbox \"12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb\""
Nov 4 04:58:38.519280 containerd[1611]: time="2025-11-04T04:58:38.518573534Z" level=info msg="StartContainer for \"fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb\""
Nov 4 04:58:38.520499 containerd[1611]: time="2025-11-04T04:58:38.520454933Z" level=info msg="connecting to shim fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb" address="unix:///run/containerd/s/fdd5af401cb4c0b33006e5d6490a1b6118118dceb18dae941feb738b2826e6a5" protocol=ttrpc version=3
Nov 4 04:58:38.556911 systemd[1]: Started cri-containerd-fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb.scope - libcontainer container fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb.
Nov 4 04:58:38.634058 containerd[1611]: time="2025-11-04T04:58:38.633983574Z" level=info msg="StartContainer for \"fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb\" returns successfully"
Nov 4 04:58:38.653083 systemd[1]: cri-containerd-fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb.scope: Deactivated successfully.
Nov 4 04:58:38.659926 containerd[1611]: time="2025-11-04T04:58:38.659875071Z" level=info msg="received exit event container_id:\"fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb\" id:\"fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb\" pid:3569 exited_at:{seconds:1762232318 nanos:659054000}"
Nov 4 04:58:38.693976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbeed2dad8af4b23cde1088dd63ab36784acfb424ded4f3a3ab212c929b1fafb-rootfs.mount: Deactivated successfully.
Nov 4 04:58:39.132080 kubelet[2838]: E1104 04:58:39.132008 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:58:39.254227 kubelet[2838]: E1104 04:58:39.254118 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:39.347260 kubelet[2838]: I1104 04:58:39.347182 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ffcdd44d-t5zlm" podStartSLOduration=3.263421128 podStartE2EDuration="6.347161421s" podCreationTimestamp="2025-11-04 04:58:33 +0000 UTC" firstStartedPulling="2025-11-04 04:58:33.71153765 +0000 UTC m=+22.695069807" lastFinishedPulling="2025-11-04 04:58:36.795277933 +0000 UTC m=+25.778810100" observedRunningTime="2025-11-04 04:58:37.305949794 +0000 UTC m=+26.289481951" watchObservedRunningTime="2025-11-04 04:58:39.347161421 +0000 UTC m=+28.330693588"
Nov 4 04:58:40.258255 kubelet[2838]: E1104 04:58:40.258213 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:40.259103 containerd[1611]: time="2025-11-04T04:58:40.258981297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 4 04:58:41.131917 kubelet[2838]: E1104 04:58:41.131858 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:58:43.132092 kubelet[2838]: E1104 04:58:43.132009 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:58:43.176046 containerd[1611]: time="2025-11-04T04:58:43.175953962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:43.177247 containerd[1611]: time="2025-11-04T04:58:43.177183939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291"
Nov 4 04:58:43.178953 containerd[1611]: time="2025-11-04T04:58:43.178913713Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:43.183476 containerd[1611]: time="2025-11-04T04:58:43.183421169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:43.184624 containerd[1611]: time="2025-11-04T04:58:43.184533656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.925421604s"
Nov 4 04:58:43.184624 containerd[1611]: time="2025-11-04T04:58:43.184575153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 4 04:58:43.189880 containerd[1611]: time="2025-11-04T04:58:43.189823419Z" level=info msg="CreateContainer within sandbox \"12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 4 04:58:43.207700 containerd[1611]: time="2025-11-04T04:58:43.206929988Z" level=info msg="Container ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:58:43.220986 containerd[1611]: time="2025-11-04T04:58:43.220916595Z" level=info msg="CreateContainer within sandbox \"12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394\""
Nov 4 04:58:43.221639 containerd[1611]: time="2025-11-04T04:58:43.221560613Z" level=info msg="StartContainer for \"ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394\""
Nov 4 04:58:43.223181 containerd[1611]: time="2025-11-04T04:58:43.223142159Z" level=info msg="connecting to shim ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394" address="unix:///run/containerd/s/fdd5af401cb4c0b33006e5d6490a1b6118118dceb18dae941feb738b2826e6a5" protocol=ttrpc version=3
Nov 4 04:58:43.250915 systemd[1]: Started cri-containerd-ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394.scope - libcontainer container ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394.
Nov 4 04:58:43.315760 containerd[1611]: time="2025-11-04T04:58:43.315708171Z" level=info msg="StartContainer for \"ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394\" returns successfully"
Nov 4 04:58:44.280683 kubelet[2838]: E1104 04:58:44.280601 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:45.133049 kubelet[2838]: E1104 04:58:45.132425 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:58:45.197527 kubelet[2838]: I1104 04:58:45.197474 2838 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 04:58:45.198096 kubelet[2838]: E1104 04:58:45.198071 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:45.283578 kubelet[2838]: E1104 04:58:45.282915 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:45.283578 kubelet[2838]: E1104 04:58:45.283152 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:45.571380 systemd[1]: cri-containerd-ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394.scope: Deactivated successfully.
Nov 4 04:58:45.572064 systemd[1]: cri-containerd-ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394.scope: Consumed 749ms CPU time, 178.3M memory peak, 3.8M read from disk, 171.3M written to disk.
Nov 4 04:58:45.573474 containerd[1611]: time="2025-11-04T04:58:45.573418339Z" level=info msg="received exit event container_id:\"ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394\" id:\"ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394\" pid:3629 exited_at:{seconds:1762232325 nanos:572066554}"
Nov 4 04:58:45.600278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac487a198eabad873b76973e582155f29ee42edb239e49d4cb573c5040439394-rootfs.mount: Deactivated successfully.
Nov 4 04:58:45.666908 kubelet[2838]: I1104 04:58:45.666868 2838 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 4 04:58:46.664744 systemd[1]: Created slice kubepods-besteffort-pod064e64ed_c9da_415f_83fd_b39b97fd06e6.slice - libcontainer container kubepods-besteffort-pod064e64ed_c9da_415f_83fd_b39b97fd06e6.slice.
Nov 4 04:58:46.833589 kubelet[2838]: I1104 04:58:46.833493 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5x6\" (UniqueName: \"kubernetes.io/projected/064e64ed-c9da-415f-83fd-b39b97fd06e6-kube-api-access-kd5x6\") pod \"whisker-cc9db6f58-6rd2g\" (UID: \"064e64ed-c9da-415f-83fd-b39b97fd06e6\") " pod="calico-system/whisker-cc9db6f58-6rd2g"
Nov 4 04:58:46.833589 kubelet[2838]: I1104 04:58:46.833572 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-ca-bundle\") pod \"whisker-cc9db6f58-6rd2g\" (UID: \"064e64ed-c9da-415f-83fd-b39b97fd06e6\") " pod="calico-system/whisker-cc9db6f58-6rd2g"
Nov 4 04:58:46.833589 kubelet[2838]: I1104 04:58:46.833604 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-backend-key-pair\") pod \"whisker-cc9db6f58-6rd2g\" (UID: \"064e64ed-c9da-415f-83fd-b39b97fd06e6\") " pod="calico-system/whisker-cc9db6f58-6rd2g"
Nov 4 04:58:46.953055 systemd[1]: Created slice kubepods-burstable-podb662e523_ecd3_47d8_8489_153b58632cbe.slice - libcontainer container kubepods-burstable-podb662e523_ecd3_47d8_8489_153b58632cbe.slice.
Nov 4 04:58:47.137171 kubelet[2838]: I1104 04:58:47.137054 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b662e523-ecd3-47d8-8489-153b58632cbe-config-volume\") pod \"coredns-674b8bbfcf-tdwlc\" (UID: \"b662e523-ecd3-47d8-8489-153b58632cbe\") " pod="kube-system/coredns-674b8bbfcf-tdwlc"
Nov 4 04:58:47.137171 kubelet[2838]: I1104 04:58:47.137116 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcsh\" (UniqueName: \"kubernetes.io/projected/b662e523-ecd3-47d8-8489-153b58632cbe-kube-api-access-7kcsh\") pod \"coredns-674b8bbfcf-tdwlc\" (UID: \"b662e523-ecd3-47d8-8489-153b58632cbe\") " pod="kube-system/coredns-674b8bbfcf-tdwlc"
Nov 4 04:58:47.178270 systemd[1]: Created slice kubepods-besteffort-pod5ad3df36_c874_47aa_a593_08839096e8e7.slice - libcontainer container kubepods-besteffort-pod5ad3df36_c874_47aa_a593_08839096e8e7.slice.
Nov 4 04:58:47.184534 systemd[1]: Created slice kubepods-besteffort-podd1afccb9_55ee_4f50_a636_3c55f302f219.slice - libcontainer container kubepods-besteffort-podd1afccb9_55ee_4f50_a636_3c55f302f219.slice.
Nov 4 04:58:47.187433 containerd[1611]: time="2025-11-04T04:58:47.187380249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnpcs,Uid:d1afccb9-55ee-4f50-a636-3c55f302f219,Namespace:calico-system,Attempt:0,}"
Nov 4 04:58:47.238423 kubelet[2838]: I1104 04:58:47.238259 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ad3df36-c874-47aa-a593-08839096e8e7-calico-apiserver-certs\") pod \"calico-apiserver-77f5f6cfbf-gqz2h\" (UID: \"5ad3df36-c874-47aa-a593-08839096e8e7\") " pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h"
Nov 4 04:58:47.238423 kubelet[2838]: I1104 04:58:47.238324 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6677\" (UniqueName: \"kubernetes.io/projected/5ad3df36-c874-47aa-a593-08839096e8e7-kube-api-access-z6677\") pod \"calico-apiserver-77f5f6cfbf-gqz2h\" (UID: \"5ad3df36-c874-47aa-a593-08839096e8e7\") " pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h"
Nov 4 04:58:47.268546 containerd[1611]: time="2025-11-04T04:58:47.268432306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cc9db6f58-6rd2g,Uid:064e64ed-c9da-415f-83fd-b39b97fd06e6,Namespace:calico-system,Attempt:0,}"
Nov 4 04:58:47.339738 kubelet[2838]: I1104 04:58:47.339226 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb84w\" (UniqueName: \"kubernetes.io/projected/898460cc-e838-40bf-8726-99fe8a847f0f-kube-api-access-qb84w\") pod \"coredns-674b8bbfcf-ckpk9\" (UID: \"898460cc-e838-40bf-8726-99fe8a847f0f\") " pod="kube-system/coredns-674b8bbfcf-ckpk9"
Nov 4 04:58:47.339738 kubelet[2838]: I1104 04:58:47.339283 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/898460cc-e838-40bf-8726-99fe8a847f0f-config-volume\") pod \"coredns-674b8bbfcf-ckpk9\" (UID: \"898460cc-e838-40bf-8726-99fe8a847f0f\") " pod="kube-system/coredns-674b8bbfcf-ckpk9"
Nov 4 04:58:47.347890 systemd[1]: Created slice kubepods-burstable-pod898460cc_e838_40bf_8726_99fe8a847f0f.slice - libcontainer container kubepods-burstable-pod898460cc_e838_40bf_8726_99fe8a847f0f.slice.
Nov 4 04:58:47.471442 kubelet[2838]: E1104 04:58:47.471187 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:47.475961 containerd[1611]: time="2025-11-04T04:58:47.475894304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 4 04:58:47.480335 systemd[1]: Created slice kubepods-besteffort-podd1d7ce6c_768a_492d_b1d6_e8d5ad10d6d6.slice - libcontainer container kubepods-besteffort-podd1d7ce6c_768a_492d_b1d6_e8d5ad10d6d6.slice.
Nov 4 04:58:47.482260 containerd[1611]: time="2025-11-04T04:58:47.482191535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-gqz2h,Uid:5ad3df36-c874-47aa-a593-08839096e8e7,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 04:58:47.542000 kubelet[2838]: I1104 04:58:47.541334 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvd4\" (UniqueName: \"kubernetes.io/projected/d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6-kube-api-access-nrvd4\") pod \"goldmane-666569f655-r89mn\" (UID: \"d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6\") " pod="calico-system/goldmane-666569f655-r89mn"
Nov 4 04:58:47.542000 kubelet[2838]: I1104 04:58:47.541441 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6-goldmane-ca-bundle\") pod \"goldmane-666569f655-r89mn\" (UID: \"d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6\") " pod="calico-system/goldmane-666569f655-r89mn"
Nov 4 04:58:47.542000 kubelet[2838]: I1104 04:58:47.541478 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6-goldmane-key-pair\") pod \"goldmane-666569f655-r89mn\" (UID: \"d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6\") " pod="calico-system/goldmane-666569f655-r89mn"
Nov 4 04:58:47.542000 kubelet[2838]: I1104 04:58:47.541581 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6-config\") pod \"goldmane-666569f655-r89mn\" (UID: \"d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6\") " pod="calico-system/goldmane-666569f655-r89mn"
Nov 4 04:58:47.556906 kubelet[2838]: E1104 04:58:47.556820 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:47.557786 containerd[1611]: time="2025-11-04T04:58:47.557700906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tdwlc,Uid:b662e523-ecd3-47d8-8489-153b58632cbe,Namespace:kube-system,Attempt:0,}"
Nov 4 04:58:47.642154 kubelet[2838]: I1104 04:58:47.642036 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c74c150-52d9-4d9d-b4fe-59734b73de89-tigera-ca-bundle\") pod \"calico-kube-controllers-86b5f8584f-qczbm\" (UID: \"1c74c150-52d9-4d9d-b4fe-59734b73de89\") " pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm"
Nov 4 04:58:47.642154 kubelet[2838]: I1104 04:58:47.642099 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkf78\" (UniqueName: \"kubernetes.io/projected/52545a29-818f-419d-b4f9-3a5f212c18e5-kube-api-access-tkf78\") pod \"calico-apiserver-77f5f6cfbf-t46fb\" (UID: \"52545a29-818f-419d-b4f9-3a5f212c18e5\") " pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb"
Nov 4 04:58:47.642154 kubelet[2838]: I1104 04:58:47.642130 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2stn\" (UniqueName: \"kubernetes.io/projected/1c74c150-52d9-4d9d-b4fe-59734b73de89-kube-api-access-c2stn\") pod \"calico-kube-controllers-86b5f8584f-qczbm\" (UID: \"1c74c150-52d9-4d9d-b4fe-59734b73de89\") " pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm"
Nov 4 04:58:47.642397 kubelet[2838]: I1104 04:58:47.642179 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52545a29-818f-419d-b4f9-3a5f212c18e5-calico-apiserver-certs\") pod \"calico-apiserver-77f5f6cfbf-t46fb\" (UID: \"52545a29-818f-419d-b4f9-3a5f212c18e5\") " pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb"
Nov 4 04:58:47.649188 systemd[1]: Created slice kubepods-besteffort-pod1c74c150_52d9_4d9d_b4fe_59734b73de89.slice - libcontainer container kubepods-besteffort-pod1c74c150_52d9_4d9d_b4fe_59734b73de89.slice.
Nov 4 04:58:47.651027 kubelet[2838]: E1104 04:58:47.650946 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:58:47.651749 containerd[1611]: time="2025-11-04T04:58:47.651670471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckpk9,Uid:898460cc-e838-40bf-8726-99fe8a847f0f,Namespace:kube-system,Attempt:0,}"
Nov 4 04:58:47.656383 systemd[1]: Created slice kubepods-besteffort-pod52545a29_818f_419d_b4f9_3a5f212c18e5.slice - libcontainer container kubepods-besteffort-pod52545a29_818f_419d_b4f9_3a5f212c18e5.slice.
Nov 4 04:58:47.785087 containerd[1611]: time="2025-11-04T04:58:47.785024788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r89mn,Uid:d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6,Namespace:calico-system,Attempt:0,}"
Nov 4 04:58:47.901204 containerd[1611]: time="2025-11-04T04:58:47.901127026Z" level=error msg="Failed to destroy network for sandbox \"0538a3b03d8072da072bf99ba18b8f04d1867fb1def308ace0453bc46afb861d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:47.934944 containerd[1611]: time="2025-11-04T04:58:47.934875638Z" level=error msg="Failed to destroy network for sandbox \"415e4041153e028cd9114b1c918f5f7ed5f4a6986a4ef8e5dc984d8a23defc3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:47.954186 containerd[1611]: time="2025-11-04T04:58:47.954138297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b5f8584f-qczbm,Uid:1c74c150-52d9-4d9d-b4fe-59734b73de89,Namespace:calico-system,Attempt:0,}"
Nov 4 04:58:47.960856 containerd[1611]: time="2025-11-04T04:58:47.960815201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-t46fb,Uid:52545a29-818f-419d-b4f9-3a5f212c18e5,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 04:58:48.062911 containerd[1611]: time="2025-11-04T04:58:48.062827740Z" level=error msg="Failed to destroy network for sandbox \"69c24e45e984a5cd3b452fa8fc4875cfaae653a678778c1b038ddec3589cbb80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.067662 systemd[1]: run-netns-cni\x2daebae960\x2de2a1\x2d662b\x2d4123\x2d0a3686bfab5d.mount: Deactivated successfully.
Nov 4 04:58:48.515960 containerd[1611]: time="2025-11-04T04:58:48.515884642Z" level=error msg="Failed to destroy network for sandbox \"33ef23237b8bec2d57820c96e5b9f4b40f3b270e75d22a8dd28b89c95ec5d99b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.518688 systemd[1]: run-netns-cni\x2d24e21552\x2d7d2d\x2d48ce\x2d5b35\x2d5e3a2ee3c7ee.mount: Deactivated successfully.
Nov 4 04:58:48.736474 containerd[1611]: time="2025-11-04T04:58:48.736382563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnpcs,Uid:d1afccb9-55ee-4f50-a636-3c55f302f219,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0538a3b03d8072da072bf99ba18b8f04d1867fb1def308ace0453bc46afb861d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.737832 kubelet[2838]: E1104 04:58:48.736796 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0538a3b03d8072da072bf99ba18b8f04d1867fb1def308ace0453bc46afb861d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.737832 kubelet[2838]: E1104 04:58:48.736912 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0538a3b03d8072da072bf99ba18b8f04d1867fb1def308ace0453bc46afb861d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnpcs"
Nov 4 04:58:48.737832 kubelet[2838]: E1104 04:58:48.736947 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0538a3b03d8072da072bf99ba18b8f04d1867fb1def308ace0453bc46afb861d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnpcs"
Nov 4 04:58:48.739162 kubelet[2838]: E1104 04:58:48.737018 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0538a3b03d8072da072bf99ba18b8f04d1867fb1def308ace0453bc46afb861d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:58:48.741749 containerd[1611]: time="2025-11-04T04:58:48.741222912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cc9db6f58-6rd2g,Uid:064e64ed-c9da-415f-83fd-b39b97fd06e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"415e4041153e028cd9114b1c918f5f7ed5f4a6986a4ef8e5dc984d8a23defc3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.742054 kubelet[2838]: E1104 04:58:48.741850 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"415e4041153e028cd9114b1c918f5f7ed5f4a6986a4ef8e5dc984d8a23defc3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.742054 kubelet[2838]: E1104 04:58:48.741985 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"415e4041153e028cd9114b1c918f5f7ed5f4a6986a4ef8e5dc984d8a23defc3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cc9db6f58-6rd2g"
Nov 4 04:58:48.742054 kubelet[2838]: E1104 04:58:48.742024 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"415e4041153e028cd9114b1c918f5f7ed5f4a6986a4ef8e5dc984d8a23defc3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cc9db6f58-6rd2g"
Nov 4 04:58:48.742195 kubelet[2838]: E1104 04:58:48.742120 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cc9db6f58-6rd2g_calico-system(064e64ed-c9da-415f-83fd-b39b97fd06e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cc9db6f58-6rd2g_calico-system(064e64ed-c9da-415f-83fd-b39b97fd06e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"415e4041153e028cd9114b1c918f5f7ed5f4a6986a4ef8e5dc984d8a23defc3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cc9db6f58-6rd2g" podUID="064e64ed-c9da-415f-83fd-b39b97fd06e6"
Nov 4 04:58:48.757798 containerd[1611]: time="2025-11-04T04:58:48.757736435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tdwlc,Uid:b662e523-ecd3-47d8-8489-153b58632cbe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c24e45e984a5cd3b452fa8fc4875cfaae653a678778c1b038ddec3589cbb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.758473 kubelet[2838]: E1104 04:58:48.758390 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c24e45e984a5cd3b452fa8fc4875cfaae653a678778c1b038ddec3589cbb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.758663 kubelet[2838]: E1104 04:58:48.758590 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c24e45e984a5cd3b452fa8fc4875cfaae653a678778c1b038ddec3589cbb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tdwlc"
Nov 4 04:58:48.758663 kubelet[2838]: E1104 04:58:48.758645 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c24e45e984a5cd3b452fa8fc4875cfaae653a678778c1b038ddec3589cbb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tdwlc"
Nov 4 04:58:48.758769 kubelet[2838]: E1104 04:58:48.758718 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tdwlc_kube-system(b662e523-ecd3-47d8-8489-153b58632cbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tdwlc_kube-system(b662e523-ecd3-47d8-8489-153b58632cbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69c24e45e984a5cd3b452fa8fc4875cfaae653a678778c1b038ddec3589cbb80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tdwlc" podUID="b662e523-ecd3-47d8-8489-153b58632cbe"
Nov 4 04:58:48.767325 containerd[1611]: time="2025-11-04T04:58:48.767053089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-gqz2h,Uid:5ad3df36-c874-47aa-a593-08839096e8e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ef23237b8bec2d57820c96e5b9f4b40f3b270e75d22a8dd28b89c95ec5d99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.768340 kubelet[2838]: E1104 04:58:48.767521 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ef23237b8bec2d57820c96e5b9f4b40f3b270e75d22a8dd28b89c95ec5d99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.768340 kubelet[2838]: E1104 04:58:48.767648 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ef23237b8bec2d57820c96e5b9f4b40f3b270e75d22a8dd28b89c95ec5d99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h"
Nov 4 04:58:48.768340 kubelet[2838]: E1104 04:58:48.767753 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ef23237b8bec2d57820c96e5b9f4b40f3b270e75d22a8dd28b89c95ec5d99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h"
Nov 4 04:58:48.768521 kubelet[2838]: E1104 04:58:48.767846 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77f5f6cfbf-gqz2h_calico-apiserver(5ad3df36-c874-47aa-a593-08839096e8e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77f5f6cfbf-gqz2h_calico-apiserver(5ad3df36-c874-47aa-a593-08839096e8e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33ef23237b8bec2d57820c96e5b9f4b40f3b270e75d22a8dd28b89c95ec5d99b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7"
Nov 4 04:58:48.823069 containerd[1611]: time="2025-11-04T04:58:48.822959779Z" level=error msg="Failed to destroy network for sandbox \"f94d9fd7372d973ecb393497ba4acc48d8ab4cabfe07b7d45fc3c47e7e133337\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 04:58:48.842344 containerd[1611]: time="2025-11-04T04:58:48.842283153Z" level=error msg="Failed to destroy network for sandbox \"caac18c1018c129438a02dd3267ca995580f2b02e88cdf79dd0e72fa2d665864\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the
calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.846008 containerd[1611]: time="2025-11-04T04:58:48.845718095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckpk9,Uid:898460cc-e838-40bf-8726-99fe8a847f0f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94d9fd7372d973ecb393497ba4acc48d8ab4cabfe07b7d45fc3c47e7e133337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.859130 containerd[1611]: time="2025-11-04T04:58:48.859047706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-t46fb,Uid:52545a29-818f-419d-b4f9-3a5f212c18e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac18c1018c129438a02dd3267ca995580f2b02e88cdf79dd0e72fa2d665864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.861766 containerd[1611]: time="2025-11-04T04:58:48.861716252Z" level=error msg="Failed to destroy network for sandbox \"5c238edb8dfd1987acd5a05d63b071bd730c9c234903fe619ca58c9ff514b539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.864941 containerd[1611]: time="2025-11-04T04:58:48.864882641Z" level=error msg="Failed to destroy network for sandbox \"7074baca0274ebe179d9a8a3b97d6fda4ce07690fcd10f09112a4d8ccfb6abca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 4 04:58:48.868514 containerd[1611]: time="2025-11-04T04:58:48.868444862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r89mn,Uid:d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c238edb8dfd1987acd5a05d63b071bd730c9c234903fe619ca58c9ff514b539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.875577 containerd[1611]: time="2025-11-04T04:58:48.875504533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b5f8584f-qczbm,Uid:1c74c150-52d9-4d9d-b4fe-59734b73de89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7074baca0274ebe179d9a8a3b97d6fda4ce07690fcd10f09112a4d8ccfb6abca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.883325 kubelet[2838]: E1104 04:58:48.883260 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94d9fd7372d973ecb393497ba4acc48d8ab4cabfe07b7d45fc3c47e7e133337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.883532 kubelet[2838]: E1104 04:58:48.883259 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7074baca0274ebe179d9a8a3b97d6fda4ce07690fcd10f09112a4d8ccfb6abca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.883532 kubelet[2838]: E1104 04:58:48.883377 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7074baca0274ebe179d9a8a3b97d6fda4ce07690fcd10f09112a4d8ccfb6abca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" Nov 4 04:58:48.883532 kubelet[2838]: E1104 04:58:48.883403 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7074baca0274ebe179d9a8a3b97d6fda4ce07690fcd10f09112a4d8ccfb6abca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" Nov 4 04:58:48.883685 kubelet[2838]: E1104 04:58:48.883460 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86b5f8584f-qczbm_calico-system(1c74c150-52d9-4d9d-b4fe-59734b73de89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86b5f8584f-qczbm_calico-system(1c74c150-52d9-4d9d-b4fe-59734b73de89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7074baca0274ebe179d9a8a3b97d6fda4ce07690fcd10f09112a4d8ccfb6abca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89" Nov 4 04:58:48.883851 kubelet[2838]: E1104 04:58:48.883346 
2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94d9fd7372d973ecb393497ba4acc48d8ab4cabfe07b7d45fc3c47e7e133337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ckpk9" Nov 4 04:58:48.883851 kubelet[2838]: E1104 04:58:48.883262 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac18c1018c129438a02dd3267ca995580f2b02e88cdf79dd0e72fa2d665864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.883851 kubelet[2838]: E1104 04:58:48.883275 2838 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c238edb8dfd1987acd5a05d63b071bd730c9c234903fe619ca58c9ff514b539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:58:48.884164 kubelet[2838]: E1104 04:58:48.883886 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c238edb8dfd1987acd5a05d63b071bd730c9c234903fe619ca58c9ff514b539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r89mn" Nov 4 04:58:48.884164 kubelet[2838]: E1104 04:58:48.883910 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"5c238edb8dfd1987acd5a05d63b071bd730c9c234903fe619ca58c9ff514b539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r89mn" Nov 4 04:58:48.884164 kubelet[2838]: E1104 04:58:48.883805 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94d9fd7372d973ecb393497ba4acc48d8ab4cabfe07b7d45fc3c47e7e133337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ckpk9" Nov 4 04:58:48.884318 kubelet[2838]: E1104 04:58:48.883949 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-r89mn_calico-system(d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-r89mn_calico-system(d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c238edb8dfd1987acd5a05d63b071bd730c9c234903fe619ca58c9ff514b539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6" Nov 4 04:58:48.884318 kubelet[2838]: E1104 04:58:48.884006 2838 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac18c1018c129438a02dd3267ca995580f2b02e88cdf79dd0e72fa2d665864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" Nov 4 04:58:48.884318 kubelet[2838]: E1104 04:58:48.884012 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ckpk9_kube-system(898460cc-e838-40bf-8726-99fe8a847f0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ckpk9_kube-system(898460cc-e838-40bf-8726-99fe8a847f0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f94d9fd7372d973ecb393497ba4acc48d8ab4cabfe07b7d45fc3c47e7e133337\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ckpk9" podUID="898460cc-e838-40bf-8726-99fe8a847f0f" Nov 4 04:58:48.884469 kubelet[2838]: E1104 04:58:48.884025 2838 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac18c1018c129438a02dd3267ca995580f2b02e88cdf79dd0e72fa2d665864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" Nov 4 04:58:48.884469 kubelet[2838]: E1104 04:58:48.884063 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77f5f6cfbf-t46fb_calico-apiserver(52545a29-818f-419d-b4f9-3a5f212c18e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77f5f6cfbf-t46fb_calico-apiserver(52545a29-818f-419d-b4f9-3a5f212c18e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caac18c1018c129438a02dd3267ca995580f2b02e88cdf79dd0e72fa2d665864\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5" Nov 4 04:58:48.945245 systemd[1]: run-netns-cni\x2dc1f16064\x2dbb19\x2db104\x2df89e\x2d1d4ed2f65ac6.mount: Deactivated successfully. Nov 4 04:58:48.945430 systemd[1]: run-netns-cni\x2dd2355f9c\x2dcb3a\x2dc8f3\x2d0ff2\x2df88425364517.mount: Deactivated successfully. Nov 4 04:58:57.721207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248125127.mount: Deactivated successfully. Nov 4 04:58:58.903563 containerd[1611]: time="2025-11-04T04:58:58.903476859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:58.904585 containerd[1611]: time="2025-11-04T04:58:58.904551500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 4 04:58:58.906226 containerd[1611]: time="2025-11-04T04:58:58.906143076Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:58.911569 containerd[1611]: time="2025-11-04T04:58:58.911521819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:58.912217 containerd[1611]: time="2025-11-04T04:58:58.912179203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size 
\"156883537\" in 11.436225218s" Nov 4 04:58:58.912269 containerd[1611]: time="2025-11-04T04:58:58.912216895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 04:58:58.924366 containerd[1611]: time="2025-11-04T04:58:58.924309499Z" level=info msg="CreateContainer within sandbox \"12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 04:58:58.936732 containerd[1611]: time="2025-11-04T04:58:58.936663431Z" level=info msg="Container c2c45d1d8ff291e092324d598e89a66baa421fbfea6bd5d0a6ecad6903cd8f2a: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:58.951941 containerd[1611]: time="2025-11-04T04:58:58.951875657Z" level=info msg="CreateContainer within sandbox \"12dfdabcca91531b6dbef915c62d62af6550c11e57428509ef347254e0d8d271\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2c45d1d8ff291e092324d598e89a66baa421fbfea6bd5d0a6ecad6903cd8f2a\"" Nov 4 04:58:58.952669 containerd[1611]: time="2025-11-04T04:58:58.952635266Z" level=info msg="StartContainer for \"c2c45d1d8ff291e092324d598e89a66baa421fbfea6bd5d0a6ecad6903cd8f2a\"" Nov 4 04:58:58.958728 containerd[1611]: time="2025-11-04T04:58:58.958186388Z" level=info msg="connecting to shim c2c45d1d8ff291e092324d598e89a66baa421fbfea6bd5d0a6ecad6903cd8f2a" address="unix:///run/containerd/s/fdd5af401cb4c0b33006e5d6490a1b6118118dceb18dae941feb738b2826e6a5" protocol=ttrpc version=3 Nov 4 04:58:58.985813 systemd[1]: Started cri-containerd-c2c45d1d8ff291e092324d598e89a66baa421fbfea6bd5d0a6ecad6903cd8f2a.scope - libcontainer container c2c45d1d8ff291e092324d598e89a66baa421fbfea6bd5d0a6ecad6903cd8f2a. 
Nov 4 04:58:59.124662 containerd[1611]: time="2025-11-04T04:58:59.121758270Z" level=info msg="StartContainer for \"c2c45d1d8ff291e092324d598e89a66baa421fbfea6bd5d0a6ecad6903cd8f2a\" returns successfully" Nov 4 04:58:59.131391 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 04:58:59.132661 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 4 04:58:59.321857 kubelet[2838]: I1104 04:58:59.321331 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd5x6\" (UniqueName: \"kubernetes.io/projected/064e64ed-c9da-415f-83fd-b39b97fd06e6-kube-api-access-kd5x6\") pod \"064e64ed-c9da-415f-83fd-b39b97fd06e6\" (UID: \"064e64ed-c9da-415f-83fd-b39b97fd06e6\") " Nov 4 04:58:59.321857 kubelet[2838]: I1104 04:58:59.321772 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-ca-bundle\") pod \"064e64ed-c9da-415f-83fd-b39b97fd06e6\" (UID: \"064e64ed-c9da-415f-83fd-b39b97fd06e6\") " Nov 4 04:58:59.321857 kubelet[2838]: I1104 04:58:59.321816 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-backend-key-pair\") pod \"064e64ed-c9da-415f-83fd-b39b97fd06e6\" (UID: \"064e64ed-c9da-415f-83fd-b39b97fd06e6\") " Nov 4 04:58:59.325684 kubelet[2838]: I1104 04:58:59.324977 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "064e64ed-c9da-415f-83fd-b39b97fd06e6" (UID: "064e64ed-c9da-415f-83fd-b39b97fd06e6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 04:58:59.330001 kubelet[2838]: I1104 04:58:59.329946 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064e64ed-c9da-415f-83fd-b39b97fd06e6-kube-api-access-kd5x6" (OuterVolumeSpecName: "kube-api-access-kd5x6") pod "064e64ed-c9da-415f-83fd-b39b97fd06e6" (UID: "064e64ed-c9da-415f-83fd-b39b97fd06e6"). InnerVolumeSpecName "kube-api-access-kd5x6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 04:58:59.331733 kubelet[2838]: I1104 04:58:59.331681 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "064e64ed-c9da-415f-83fd-b39b97fd06e6" (UID: "064e64ed-c9da-415f-83fd-b39b97fd06e6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 04:58:59.385271 kubelet[2838]: E1104 04:58:59.385229 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:59.395035 systemd[1]: Removed slice kubepods-besteffort-pod064e64ed_c9da_415f_83fd_b39b97fd06e6.slice - libcontainer container kubepods-besteffort-pod064e64ed_c9da_415f_83fd_b39b97fd06e6.slice. 
Nov 4 04:58:59.404260 kubelet[2838]: I1104 04:58:59.404185 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sl8xh" podStartSLOduration=1.317650325 podStartE2EDuration="26.40415983s" podCreationTimestamp="2025-11-04 04:58:33 +0000 UTC" firstStartedPulling="2025-11-04 04:58:33.826465477 +0000 UTC m=+22.809997624" lastFinishedPulling="2025-11-04 04:58:58.912974982 +0000 UTC m=+47.896507129" observedRunningTime="2025-11-04 04:58:59.403545759 +0000 UTC m=+48.387077906" watchObservedRunningTime="2025-11-04 04:58:59.40415983 +0000 UTC m=+48.387691967" Nov 4 04:58:59.422855 kubelet[2838]: I1104 04:58:59.422790 2838 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 04:58:59.422855 kubelet[2838]: I1104 04:58:59.422840 2838 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/064e64ed-c9da-415f-83fd-b39b97fd06e6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 04:58:59.422855 kubelet[2838]: I1104 04:58:59.422853 2838 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kd5x6\" (UniqueName: \"kubernetes.io/projected/064e64ed-c9da-415f-83fd-b39b97fd06e6-kube-api-access-kd5x6\") on node \"localhost\" DevicePath \"\"" Nov 4 04:58:59.692520 systemd[1]: Created slice kubepods-besteffort-pod60e4d4ba_e12d_4bb4_b4da_ea310004d8fa.slice - libcontainer container kubepods-besteffort-pod60e4d4ba_e12d_4bb4_b4da_ea310004d8fa.slice. 
Nov 4 04:58:59.725699 kubelet[2838]: I1104 04:58:59.725536 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60e4d4ba-e12d-4bb4-b4da-ea310004d8fa-whisker-backend-key-pair\") pod \"whisker-6d9bd49888-82v9k\" (UID: \"60e4d4ba-e12d-4bb4-b4da-ea310004d8fa\") " pod="calico-system/whisker-6d9bd49888-82v9k" Nov 4 04:58:59.726116 kubelet[2838]: I1104 04:58:59.725600 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkbf\" (UniqueName: \"kubernetes.io/projected/60e4d4ba-e12d-4bb4-b4da-ea310004d8fa-kube-api-access-5gkbf\") pod \"whisker-6d9bd49888-82v9k\" (UID: \"60e4d4ba-e12d-4bb4-b4da-ea310004d8fa\") " pod="calico-system/whisker-6d9bd49888-82v9k" Nov 4 04:58:59.726116 kubelet[2838]: I1104 04:58:59.726004 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60e4d4ba-e12d-4bb4-b4da-ea310004d8fa-whisker-ca-bundle\") pod \"whisker-6d9bd49888-82v9k\" (UID: \"60e4d4ba-e12d-4bb4-b4da-ea310004d8fa\") " pod="calico-system/whisker-6d9bd49888-82v9k" Nov 4 04:58:59.921204 systemd[1]: var-lib-kubelet-pods-064e64ed\x2dc9da\x2d415f\x2d83fd\x2db39b97fd06e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkd5x6.mount: Deactivated successfully. Nov 4 04:58:59.921341 systemd[1]: var-lib-kubelet-pods-064e64ed\x2dc9da\x2d415f\x2d83fd\x2db39b97fd06e6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 4 04:58:59.999947 containerd[1611]: time="2025-11-04T04:58:59.999805618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d9bd49888-82v9k,Uid:60e4d4ba-e12d-4bb4-b4da-ea310004d8fa,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:00.132788 containerd[1611]: time="2025-11-04T04:59:00.132713247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b5f8584f-qczbm,Uid:1c74c150-52d9-4d9d-b4fe-59734b73de89,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:00.133202 kubelet[2838]: E1104 04:59:00.133164 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:00.134784 containerd[1611]: time="2025-11-04T04:59:00.134749659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckpk9,Uid:898460cc-e838-40bf-8726-99fe8a847f0f,Namespace:kube-system,Attempt:0,}" Nov 4 04:59:00.135066 containerd[1611]: time="2025-11-04T04:59:00.134705744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r89mn,Uid:d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:00.202482 systemd-networkd[1516]: cali0b6165ab623: Link UP Nov 4 04:59:00.202841 systemd-networkd[1516]: cali0b6165ab623: Gained carrier Nov 4 04:59:00.227683 containerd[1611]: 2025-11-04 04:59:00.033 [INFO][4004] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 04:59:00.227683 containerd[1611]: 2025-11-04 04:59:00.058 [INFO][4004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d9bd49888--82v9k-eth0 whisker-6d9bd49888- calico-system 60e4d4ba-e12d-4bb4-b4da-ea310004d8fa 936 0 2025-11-04 04:58:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d9bd49888 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d9bd49888-82v9k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0b6165ab623 [] [] }} ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-" Nov 4 04:59:00.227683 containerd[1611]: 2025-11-04 04:59:00.058 [INFO][4004] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" Nov 4 04:59:00.227683 containerd[1611]: 2025-11-04 04:59:00.124 [INFO][4018] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" HandleID="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Workload="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.125 [INFO][4018] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" HandleID="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Workload="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00070b8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d9bd49888-82v9k", "timestamp":"2025-11-04 04:59:00.124854496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.125 [INFO][4018] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.125 [INFO][4018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.126 [INFO][4018] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.134 [INFO][4018] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" host="localhost" Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.141 [INFO][4018] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.148 [INFO][4018] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.157 [INFO][4018] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.166 [INFO][4018] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.228000 containerd[1611]: 2025-11-04 04:59:00.166 [INFO][4018] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" host="localhost" Nov 4 04:59:00.228243 containerd[1611]: 2025-11-04 04:59:00.168 [INFO][4018] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd Nov 4 04:59:00.228243 containerd[1611]: 2025-11-04 04:59:00.174 [INFO][4018] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" host="localhost" Nov 4 04:59:00.228243 
containerd[1611]: 2025-11-04 04:59:00.182 [INFO][4018] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" host="localhost" Nov 4 04:59:00.228243 containerd[1611]: 2025-11-04 04:59:00.183 [INFO][4018] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" host="localhost" Nov 4 04:59:00.228243 containerd[1611]: 2025-11-04 04:59:00.183 [INFO][4018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:00.228243 containerd[1611]: 2025-11-04 04:59:00.183 [INFO][4018] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" HandleID="k8s-pod-network.61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Workload="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" Nov 4 04:59:00.228375 containerd[1611]: 2025-11-04 04:59:00.187 [INFO][4004] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d9bd49888--82v9k-eth0", GenerateName:"whisker-6d9bd49888-", Namespace:"calico-system", SelfLink:"", UID:"60e4d4ba-e12d-4bb4-b4da-ea310004d8fa", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d9bd49888", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d9bd49888-82v9k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0b6165ab623", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.228375 containerd[1611]: 2025-11-04 04:59:00.188 [INFO][4004] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" Nov 4 04:59:00.228497 containerd[1611]: 2025-11-04 04:59:00.188 [INFO][4004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b6165ab623 ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" Nov 4 04:59:00.228497 containerd[1611]: 2025-11-04 04:59:00.200 [INFO][4004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" Nov 4 04:59:00.228549 containerd[1611]: 2025-11-04 04:59:00.205 [INFO][4004] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d9bd49888--82v9k-eth0", GenerateName:"whisker-6d9bd49888-", Namespace:"calico-system", SelfLink:"", UID:"60e4d4ba-e12d-4bb4-b4da-ea310004d8fa", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d9bd49888", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd", Pod:"whisker-6d9bd49888-82v9k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0b6165ab623", MAC:"8e:01:be:60:59:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.228600 containerd[1611]: 2025-11-04 04:59:00.222 [INFO][4004] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" Namespace="calico-system" Pod="whisker-6d9bd49888-82v9k" 
WorkloadEndpoint="localhost-k8s-whisker--6d9bd49888--82v9k-eth0" Nov 4 04:59:00.498742 systemd-networkd[1516]: calic6adc8bc226: Link UP Nov 4 04:59:00.500706 systemd-networkd[1516]: calic6adc8bc226: Gained carrier Nov 4 04:59:00.537780 containerd[1611]: time="2025-11-04T04:59:00.537714352Z" level=info msg="connecting to shim 61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd" address="unix:///run/containerd/s/c7cc9de8e33fd95a432a940f7cf5ea5b81aa752cfb1d34d099e5acba2af08e34" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:00.566061 systemd[1]: Started cri-containerd-61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd.scope - libcontainer container 61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd. Nov 4 04:59:00.569005 containerd[1611]: 2025-11-04 04:59:00.186 [INFO][4039] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 04:59:00.569005 containerd[1611]: 2025-11-04 04:59:00.204 [INFO][4039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0 calico-kube-controllers-86b5f8584f- calico-system 1c74c150-52d9-4d9d-b4fe-59734b73de89 866 0 2025-11-04 04:58:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86b5f8584f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86b5f8584f-qczbm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic6adc8bc226 [] [] }} ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-" Nov 4 04:59:00.569005 containerd[1611]: 2025-11-04 
04:59:00.204 [INFO][4039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" Nov 4 04:59:00.569005 containerd[1611]: 2025-11-04 04:59:00.275 [INFO][4076] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" HandleID="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Workload="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.275 [INFO][4076] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" HandleID="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Workload="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86b5f8584f-qczbm", "timestamp":"2025-11-04 04:59:00.275398701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.275 [INFO][4076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.276 [INFO][4076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.276 [INFO][4076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.313 [INFO][4076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" host="localhost" Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.325 [INFO][4076] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.332 [INFO][4076] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.335 [INFO][4076] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.339 [INFO][4076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.569256 containerd[1611]: 2025-11-04 04:59:00.339 [INFO][4076] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" host="localhost" Nov 4 04:59:00.569600 containerd[1611]: 2025-11-04 04:59:00.341 [INFO][4076] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726 Nov 4 04:59:00.569600 containerd[1611]: 2025-11-04 04:59:00.351 [INFO][4076] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" host="localhost" Nov 4 04:59:00.569600 containerd[1611]: 2025-11-04 04:59:00.489 [INFO][4076] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" host="localhost" Nov 4 04:59:00.569600 containerd[1611]: 2025-11-04 04:59:00.489 [INFO][4076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" host="localhost" Nov 4 04:59:00.569600 containerd[1611]: 2025-11-04 04:59:00.490 [INFO][4076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:00.569600 containerd[1611]: 2025-11-04 04:59:00.490 [INFO][4076] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" HandleID="k8s-pod-network.7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Workload="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" Nov 4 04:59:00.570226 containerd[1611]: 2025-11-04 04:59:00.494 [INFO][4039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0", GenerateName:"calico-kube-controllers-86b5f8584f-", Namespace:"calico-system", SelfLink:"", UID:"1c74c150-52d9-4d9d-b4fe-59734b73de89", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b5f8584f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86b5f8584f-qczbm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6adc8bc226", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.570330 containerd[1611]: 2025-11-04 04:59:00.495 [INFO][4039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" Nov 4 04:59:00.570330 containerd[1611]: 2025-11-04 04:59:00.495 [INFO][4039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6adc8bc226 ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" Nov 4 04:59:00.570330 containerd[1611]: 2025-11-04 04:59:00.500 [INFO][4039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" Nov 4 04:59:00.570430 containerd[1611]: 2025-11-04 
04:59:00.501 [INFO][4039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0", GenerateName:"calico-kube-controllers-86b5f8584f-", Namespace:"calico-system", SelfLink:"", UID:"1c74c150-52d9-4d9d-b4fe-59734b73de89", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b5f8584f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726", Pod:"calico-kube-controllers-86b5f8584f-qczbm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6adc8bc226", MAC:"b2:59:38:e2:9a:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.570587 containerd[1611]: 2025-11-04 
04:59:00.562 [INFO][4039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" Namespace="calico-system" Pod="calico-kube-controllers-86b5f8584f-qczbm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b5f8584f--qczbm-eth0" Nov 4 04:59:00.592960 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:00.692863 containerd[1611]: time="2025-11-04T04:59:00.692284993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d9bd49888-82v9k,Uid:60e4d4ba-e12d-4bb4-b4da-ea310004d8fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"61cb8d8a5828a49763c38a09d1302804057d79b04d47f397a7ccfad04882e8dd\"" Nov 4 04:59:00.697046 containerd[1611]: time="2025-11-04T04:59:00.696685951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:59:00.708085 systemd-networkd[1516]: cali22c6e6f9650: Link UP Nov 4 04:59:00.708375 systemd-networkd[1516]: cali22c6e6f9650: Gained carrier Nov 4 04:59:00.728754 containerd[1611]: 2025-11-04 04:59:00.197 [INFO][4045] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 04:59:00.728754 containerd[1611]: 2025-11-04 04:59:00.221 [INFO][4045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--r89mn-eth0 goldmane-666569f655- calico-system d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6 865 0 2025-11-04 04:58:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-r89mn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali22c6e6f9650 [] [] }} ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" 
Namespace="calico-system" Pod="goldmane-666569f655-r89mn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-" Nov 4 04:59:00.728754 containerd[1611]: 2025-11-04 04:59:00.221 [INFO][4045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Namespace="calico-system" Pod="goldmane-666569f655-r89mn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-eth0" Nov 4 04:59:00.728754 containerd[1611]: 2025-11-04 04:59:00.293 [INFO][4088] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" HandleID="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Workload="localhost-k8s-goldmane--666569f655--r89mn-eth0" Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.294 [INFO][4088] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" HandleID="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Workload="localhost-k8s-goldmane--666569f655--r89mn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d54c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-r89mn", "timestamp":"2025-11-04 04:59:00.293589247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.294 [INFO][4088] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.490 [INFO][4088] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.490 [INFO][4088] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.502 [INFO][4088] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" host="localhost" Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.562 [INFO][4088] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.576 [INFO][4088] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.580 [INFO][4088] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.584 [INFO][4088] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.729274 containerd[1611]: 2025-11-04 04:59:00.584 [INFO][4088] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" host="localhost" Nov 4 04:59:00.729643 containerd[1611]: 2025-11-04 04:59:00.585 [INFO][4088] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8 Nov 4 04:59:00.729643 containerd[1611]: 2025-11-04 04:59:00.645 [INFO][4088] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" host="localhost" Nov 4 04:59:00.729643 containerd[1611]: 2025-11-04 04:59:00.695 [INFO][4088] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" host="localhost" Nov 4 04:59:00.729643 containerd[1611]: 2025-11-04 04:59:00.695 [INFO][4088] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" host="localhost" Nov 4 04:59:00.729643 containerd[1611]: 2025-11-04 04:59:00.697 [INFO][4088] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:00.729643 containerd[1611]: 2025-11-04 04:59:00.697 [INFO][4088] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" HandleID="k8s-pod-network.aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Workload="localhost-k8s-goldmane--666569f655--r89mn-eth0" Nov 4 04:59:00.730035 containerd[1611]: 2025-11-04 04:59:00.701 [INFO][4045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Namespace="calico-system" Pod="goldmane-666569f655-r89mn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r89mn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-r89mn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22c6e6f9650", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.730035 containerd[1611]: 2025-11-04 04:59:00.701 [INFO][4045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Namespace="calico-system" Pod="goldmane-666569f655-r89mn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-eth0" Nov 4 04:59:00.730156 containerd[1611]: 2025-11-04 04:59:00.701 [INFO][4045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22c6e6f9650 ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Namespace="calico-system" Pod="goldmane-666569f655-r89mn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-eth0" Nov 4 04:59:00.730156 containerd[1611]: 2025-11-04 04:59:00.707 [INFO][4045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Namespace="calico-system" Pod="goldmane-666569f655-r89mn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-eth0" Nov 4 04:59:00.730227 containerd[1611]: 2025-11-04 04:59:00.708 [INFO][4045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Namespace="calico-system" Pod="goldmane-666569f655-r89mn" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r89mn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8", Pod:"goldmane-666569f655-r89mn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22c6e6f9650", MAC:"7a:00:14:59:bb:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.730310 containerd[1611]: 2025-11-04 04:59:00.723 [INFO][4045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" Namespace="calico-system" Pod="goldmane-666569f655-r89mn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r89mn-eth0" Nov 4 04:59:00.736600 containerd[1611]: time="2025-11-04T04:59:00.736530435Z" level=info msg="connecting to shim 
7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726" address="unix:///run/containerd/s/4c5fb12a5ad46f9ef39a1a84d8cd106650f05c8fe3273a08cd36dc1852d424d0" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:00.771895 systemd-networkd[1516]: califfab88bfef0: Link UP Nov 4 04:59:00.772160 systemd-networkd[1516]: califfab88bfef0: Gained carrier Nov 4 04:59:00.806435 containerd[1611]: 2025-11-04 04:59:00.193 [INFO][4025] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 04:59:00.806435 containerd[1611]: 2025-11-04 04:59:00.223 [INFO][4025] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0 coredns-674b8bbfcf- kube-system 898460cc-e838-40bf-8726-99fe8a847f0f 859 0 2025-11-04 04:58:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ckpk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califfab88bfef0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-" Nov 4 04:59:00.806435 containerd[1611]: 2025-11-04 04:59:00.223 [INFO][4025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" Nov 4 04:59:00.806435 containerd[1611]: 2025-11-04 04:59:00.293 [INFO][4086] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" 
HandleID="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Workload="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.294 [INFO][4086] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" HandleID="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Workload="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ckpk9", "timestamp":"2025-11-04 04:59:00.293750434 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.294 [INFO][4086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.696 [INFO][4086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.696 [INFO][4086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.707 [INFO][4086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" host="localhost" Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.716 [INFO][4086] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.727 [INFO][4086] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.730 [INFO][4086] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.735 [INFO][4086] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:00.806773 containerd[1611]: 2025-11-04 04:59:00.735 [INFO][4086] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" host="localhost" Nov 4 04:59:00.807129 containerd[1611]: 2025-11-04 04:59:00.737 [INFO][4086] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b Nov 4 04:59:00.807129 containerd[1611]: 2025-11-04 04:59:00.743 [INFO][4086] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" host="localhost" Nov 4 04:59:00.807129 containerd[1611]: 2025-11-04 04:59:00.753 [INFO][4086] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" host="localhost" Nov 4 04:59:00.807129 containerd[1611]: 2025-11-04 04:59:00.753 [INFO][4086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" host="localhost" Nov 4 04:59:00.807129 containerd[1611]: 2025-11-04 04:59:00.753 [INFO][4086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:00.807129 containerd[1611]: 2025-11-04 04:59:00.753 [INFO][4086] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" HandleID="k8s-pod-network.e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Workload="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" Nov 4 04:59:00.808949 containerd[1611]: 2025-11-04 04:59:00.760 [INFO][4025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"898460cc-e838-40bf-8726-99fe8a847f0f", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ckpk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfab88bfef0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.809258 containerd[1611]: 2025-11-04 04:59:00.760 [INFO][4025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" Nov 4 04:59:00.809258 containerd[1611]: 2025-11-04 04:59:00.760 [INFO][4025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfab88bfef0 ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" Nov 4 04:59:00.809258 containerd[1611]: 2025-11-04 04:59:00.766 [INFO][4025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" Nov 4 04:59:00.809529 containerd[1611]: 2025-11-04 04:59:00.768 [INFO][4025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"898460cc-e838-40bf-8726-99fe8a847f0f", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b", Pod:"coredns-674b8bbfcf-ckpk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfab88bfef0", MAC:"d2:4a:ec:54:ae:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:00.809529 containerd[1611]: 2025-11-04 04:59:00.793 [INFO][4025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckpk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ckpk9-eth0" Nov 4 04:59:00.812934 systemd[1]: Started cri-containerd-7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726.scope - libcontainer container 7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726. Nov 4 04:59:00.837792 containerd[1611]: time="2025-11-04T04:59:00.837709678Z" level=info msg="connecting to shim aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8" address="unix:///run/containerd/s/aa042c82731ed4588a9cd9d6d6905d44590f97fa7f8ab8d20d2467516881d858" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:00.903901 systemd[1]: Started cri-containerd-aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8.scope - libcontainer container aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8. 
Nov 4 04:59:00.910926 kubelet[2838]: I1104 04:59:00.910859 2838 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:59:00.915244 kubelet[2838]: E1104 04:59:00.913945 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:00.966331 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:00.976071 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:00.984222 containerd[1611]: time="2025-11-04T04:59:00.984133600Z" level=info msg="connecting to shim e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b" address="unix:///run/containerd/s/aea25e8a11faeb859ce3aa1ca183614710d67f31f2a7156eb92b3e00ae1b9d71" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:01.053947 containerd[1611]: time="2025-11-04T04:59:01.053585799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:01.060260 containerd[1611]: time="2025-11-04T04:59:01.059820188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:59:01.061706 containerd[1611]: time="2025-11-04T04:59:01.059820669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:01.061816 kubelet[2838]: E1104 04:59:01.060922 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:01.061816 kubelet[2838]: E1104 04:59:01.060987 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:01.071977 kubelet[2838]: E1104 04:59:01.070365 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:406f63172baa4248bd0f136be65aaf62,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5gkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6d9bd49888-82v9k_calico-system(60e4d4ba-e12d-4bb4-b4da-ea310004d8fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:01.073710 containerd[1611]: time="2025-11-04T04:59:01.073455379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:59:01.077263 systemd[1]: Started cri-containerd-e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b.scope - libcontainer container e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b. Nov 4 04:59:01.123705 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:01.139251 kubelet[2838]: I1104 04:59:01.139159 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064e64ed-c9da-415f-83fd-b39b97fd06e6" path="/var/lib/kubelet/pods/064e64ed-c9da-415f-83fd-b39b97fd06e6/volumes" Nov 4 04:59:01.151929 containerd[1611]: time="2025-11-04T04:59:01.151803343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b5f8584f-qczbm,Uid:1c74c150-52d9-4d9d-b4fe-59734b73de89,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ccccd2e84cc738d64a5fad2f1defeb7de8dc378b46e28137d909842177ff726\"" Nov 4 04:59:01.164768 containerd[1611]: time="2025-11-04T04:59:01.164695648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r89mn,Uid:d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"aca9adef8efddfceac7ffc0e0f2803b36850e8dcb5b33f5e7e66f46d32a280d8\"" Nov 4 04:59:01.362435 containerd[1611]: time="2025-11-04T04:59:01.362339979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckpk9,Uid:898460cc-e838-40bf-8726-99fe8a847f0f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b\"" Nov 4 04:59:01.364583 kubelet[2838]: E1104 04:59:01.364515 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:01.411920 containerd[1611]: time="2025-11-04T04:59:01.411871567Z" level=info msg="CreateContainer within sandbox \"e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:59:01.426660 containerd[1611]: time="2025-11-04T04:59:01.426573168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:01.435581 containerd[1611]: time="2025-11-04T04:59:01.435514255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:59:01.435581 containerd[1611]: time="2025-11-04T04:59:01.435660624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:01.436127 kubelet[2838]: E1104 04:59:01.436061 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:01.436331 kubelet[2838]: E1104 04:59:01.436236 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:01.436945 containerd[1611]: time="2025-11-04T04:59:01.436874697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:59:01.437151 kubelet[2838]: E1104 04:59:01.436978 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d9bd49888-82v9k_calico-system(60e4d4ba-e12d-4bb4-b4da-ea310004d8fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:01.438307 kubelet[2838]: E1104 04:59:01.438203 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d9bd49888-82v9k" podUID="60e4d4ba-e12d-4bb4-b4da-ea310004d8fa" Nov 4 04:59:01.485640 containerd[1611]: time="2025-11-04T04:59:01.485086330Z" level=info msg="Container 975a2b96c313abaa7c6b1f74366fba37c888313117afff7143498bd87d2870b5: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:59:01.485229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625092403.mount: Deactivated successfully. 
Nov 4 04:59:01.493484 containerd[1611]: time="2025-11-04T04:59:01.493420450Z" level=info msg="CreateContainer within sandbox \"e68c08504adb10e5eb87e582fbda97f1779466899d9c0b329573193b7ac4085b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"975a2b96c313abaa7c6b1f74366fba37c888313117afff7143498bd87d2870b5\"" Nov 4 04:59:01.494278 containerd[1611]: time="2025-11-04T04:59:01.494237506Z" level=info msg="StartContainer for \"975a2b96c313abaa7c6b1f74366fba37c888313117afff7143498bd87d2870b5\"" Nov 4 04:59:01.495693 containerd[1611]: time="2025-11-04T04:59:01.495656741Z" level=info msg="connecting to shim 975a2b96c313abaa7c6b1f74366fba37c888313117afff7143498bd87d2870b5" address="unix:///run/containerd/s/aea25e8a11faeb859ce3aa1ca183614710d67f31f2a7156eb92b3e00ae1b9d71" protocol=ttrpc version=3 Nov 4 04:59:01.517859 systemd[1]: Started cri-containerd-975a2b96c313abaa7c6b1f74366fba37c888313117afff7143498bd87d2870b5.scope - libcontainer container 975a2b96c313abaa7c6b1f74366fba37c888313117afff7143498bd87d2870b5. 
Nov 4 04:59:01.560235 containerd[1611]: time="2025-11-04T04:59:01.560181056Z" level=info msg="StartContainer for \"975a2b96c313abaa7c6b1f74366fba37c888313117afff7143498bd87d2870b5\" returns successfully" Nov 4 04:59:01.671820 systemd-networkd[1516]: vxlan.calico: Link UP Nov 4 04:59:01.671833 systemd-networkd[1516]: vxlan.calico: Gained carrier Nov 4 04:59:01.757045 systemd-networkd[1516]: calic6adc8bc226: Gained IPv6LL Nov 4 04:59:01.758755 containerd[1611]: time="2025-11-04T04:59:01.758671188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:01.764943 containerd[1611]: time="2025-11-04T04:59:01.764699906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:59:01.764943 containerd[1611]: time="2025-11-04T04:59:01.764763006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:01.766013 kubelet[2838]: E1104 04:59:01.765076 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:01.766013 kubelet[2838]: E1104 04:59:01.765137 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:01.766013 kubelet[2838]: E1104 04:59:01.765392 2838 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2stn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86b5f8584f-qczbm_calico-system(1c74c150-52d9-4d9d-b4fe-59734b73de89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:01.767133 kubelet[2838]: E1104 04:59:01.766563 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89" Nov 4 04:59:01.767316 containerd[1611]: time="2025-11-04T04:59:01.767287094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:59:01.884879 systemd-networkd[1516]: cali0b6165ab623: Gained IPv6LL Nov 4 04:59:02.013868 systemd-networkd[1516]: califfab88bfef0: Gained IPv6LL Nov 4 
04:59:02.132554 containerd[1611]: time="2025-11-04T04:59:02.132478291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnpcs,Uid:d1afccb9-55ee-4f50-a636-3c55f302f219,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:02.207902 containerd[1611]: time="2025-11-04T04:59:02.207821427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:02.209392 containerd[1611]: time="2025-11-04T04:59:02.209354017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:02.209744 containerd[1611]: time="2025-11-04T04:59:02.209692171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:59:02.210105 kubelet[2838]: E1104 04:59:02.210055 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:02.210536 kubelet[2838]: E1104 04:59:02.210125 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:02.210536 kubelet[2838]: E1104 04:59:02.210312 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r89mn_calico-system(d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:02.212388 kubelet[2838]: E1104 04:59:02.212254 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6" Nov 4 04:59:02.396868 systemd-networkd[1516]: cali22c6e6f9650: Gained IPv6LL Nov 4 04:59:02.401732 kubelet[2838]: E1104 04:59:02.401695 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:02.402373 kubelet[2838]: E1104 04:59:02.402341 2838 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6" Nov 4 04:59:02.402676 kubelet[2838]: E1104 04:59:02.402640 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d9bd49888-82v9k" podUID="60e4d4ba-e12d-4bb4-b4da-ea310004d8fa" Nov 4 04:59:02.402763 kubelet[2838]: E1104 04:59:02.402730 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89" Nov 4 04:59:02.712817 systemd-networkd[1516]: cali3dc986bf6d8: Link UP Nov 4 04:59:02.713369 systemd-networkd[1516]: cali3dc986bf6d8: Gained carrier Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.175 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jnpcs-eth0 csi-node-driver- calico-system d1afccb9-55ee-4f50-a636-3c55f302f219 726 0 2025-11-04 04:58:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jnpcs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3dc986bf6d8 [] [] }} ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Namespace="calico-system" Pod="csi-node-driver-jnpcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.175 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Namespace="calico-system" Pod="csi-node-driver-jnpcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-eth0" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.212 [INFO][4614] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" HandleID="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Workload="localhost-k8s-csi--node--driver--jnpcs-eth0" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.213 [INFO][4614] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" HandleID="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Workload="localhost-k8s-csi--node--driver--jnpcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003243e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jnpcs", "timestamp":"2025-11-04 04:59:02.212527832 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.213 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.213 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.213 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.220 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.226 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.230 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.233 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.235 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.236 [INFO][4614] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.238 [INFO][4614] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.487 [INFO][4614] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.705 [INFO][4614] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.705 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" host="localhost" Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.705 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:02.778367 containerd[1611]: 2025-11-04 04:59:02.705 [INFO][4614] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" HandleID="k8s-pod-network.74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Workload="localhost-k8s-csi--node--driver--jnpcs-eth0" Nov 4 04:59:02.779205 containerd[1611]: 2025-11-04 04:59:02.708 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Namespace="calico-system" Pod="csi-node-driver-jnpcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jnpcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1afccb9-55ee-4f50-a636-3c55f302f219", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jnpcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dc986bf6d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:02.779205 containerd[1611]: 2025-11-04 04:59:02.709 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Namespace="calico-system" Pod="csi-node-driver-jnpcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-eth0" Nov 4 04:59:02.779205 containerd[1611]: 2025-11-04 04:59:02.709 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dc986bf6d8 ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Namespace="calico-system" Pod="csi-node-driver-jnpcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-eth0" Nov 4 04:59:02.779205 containerd[1611]: 2025-11-04 04:59:02.714 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Namespace="calico-system" Pod="csi-node-driver-jnpcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-eth0" Nov 4 04:59:02.779205 containerd[1611]: 2025-11-04 04:59:02.714 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" 
Namespace="calico-system" Pod="csi-node-driver-jnpcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jnpcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1afccb9-55ee-4f50-a636-3c55f302f219", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e", Pod:"csi-node-driver-jnpcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dc986bf6d8", MAC:"2e:ee:d2:bf:d6:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:02.779205 containerd[1611]: 2025-11-04 04:59:02.770 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" Namespace="calico-system" Pod="csi-node-driver-jnpcs" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jnpcs-eth0" Nov 4 04:59:02.821815 kubelet[2838]: I1104 04:59:02.821704 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ckpk9" podStartSLOduration=44.821681246 podStartE2EDuration="44.821681246s" podCreationTimestamp="2025-11-04 04:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:59:02.803215367 +0000 UTC m=+51.786747514" watchObservedRunningTime="2025-11-04 04:59:02.821681246 +0000 UTC m=+51.805213393" Nov 4 04:59:02.843142 containerd[1611]: time="2025-11-04T04:59:02.843048041Z" level=info msg="connecting to shim 74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e" address="unix:///run/containerd/s/7f44675a4560089696bff3de4b51ce50fb94b272b2d30de7b6661dce0b9d0fce" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:02.875818 systemd[1]: Started cri-containerd-74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e.scope - libcontainer container 74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e. 
Nov 4 04:59:02.890263 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:02.908183 containerd[1611]: time="2025-11-04T04:59:02.908133241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnpcs,Uid:d1afccb9-55ee-4f50-a636-3c55f302f219,Namespace:calico-system,Attempt:0,} returns sandbox id \"74e3c8a0267c4dabe498e1cf35103c0e40aeae5b2143736525ac36572323736e\"" Nov 4 04:59:02.910081 containerd[1611]: time="2025-11-04T04:59:02.910029572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:59:02.972880 systemd-networkd[1516]: vxlan.calico: Gained IPv6LL Nov 4 04:59:03.132758 containerd[1611]: time="2025-11-04T04:59:03.132695039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-gqz2h,Uid:5ad3df36-c874-47aa-a593-08839096e8e7,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:59:03.231732 containerd[1611]: time="2025-11-04T04:59:03.231530730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:03.233110 containerd[1611]: time="2025-11-04T04:59:03.233054360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:59:03.233179 containerd[1611]: time="2025-11-04T04:59:03.233127860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:03.233426 kubelet[2838]: E1104 04:59:03.233375 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:03.233889 kubelet[2838]: E1104 
04:59:03.233441 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:03.233889 kubelet[2838]: E1104 04:59:03.233637 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bskz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfil
e:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:03.235752 containerd[1611]: time="2025-11-04T04:59:03.235663477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:59:03.250565 systemd-networkd[1516]: caliad9b71fb9ea: Link UP Nov 4 04:59:03.251122 systemd-networkd[1516]: caliad9b71fb9ea: Gained carrier Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.176 [INFO][4681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0 calico-apiserver-77f5f6cfbf- calico-apiserver 5ad3df36-c874-47aa-a593-08839096e8e7 858 0 2025-11-04 04:58:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77f5f6cfbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77f5f6cfbf-gqz2h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad9b71fb9ea [] [] }} ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.176 [INFO][4681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.203 [INFO][4695] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" HandleID="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Workload="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.203 [INFO][4695] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" HandleID="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Workload="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77f5f6cfbf-gqz2h", "timestamp":"2025-11-04 04:59:03.203114572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.203 [INFO][4695] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.203 [INFO][4695] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.203 [INFO][4695] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.210 [INFO][4695] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.215 [INFO][4695] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.220 [INFO][4695] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.223 [INFO][4695] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.226 [INFO][4695] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.226 [INFO][4695] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.228 [INFO][4695] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42 Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.232 [INFO][4695] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.240 [INFO][4695] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.240 [INFO][4695] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" host="localhost" Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.240 [INFO][4695] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:03.267804 containerd[1611]: 2025-11-04 04:59:03.240 [INFO][4695] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" HandleID="k8s-pod-network.a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Workload="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" Nov 4 04:59:03.268754 containerd[1611]: 2025-11-04 04:59:03.246 [INFO][4681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0", GenerateName:"calico-apiserver-77f5f6cfbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ad3df36-c874-47aa-a593-08839096e8e7", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f5f6cfbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77f5f6cfbf-gqz2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad9b71fb9ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:03.268754 containerd[1611]: 2025-11-04 04:59:03.246 [INFO][4681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" Nov 4 04:59:03.268754 containerd[1611]: 2025-11-04 04:59:03.246 [INFO][4681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad9b71fb9ea ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" Nov 4 04:59:03.268754 containerd[1611]: 2025-11-04 04:59:03.251 [INFO][4681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" Nov 4 04:59:03.268754 containerd[1611]: 2025-11-04 04:59:03.251 [INFO][4681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0", GenerateName:"calico-apiserver-77f5f6cfbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ad3df36-c874-47aa-a593-08839096e8e7", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f5f6cfbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42", Pod:"calico-apiserver-77f5f6cfbf-gqz2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad9b71fb9ea", MAC:"a2:b6:1e:f6:61:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:03.268754 containerd[1611]: 2025-11-04 04:59:03.262 [INFO][4681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-gqz2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--gqz2h-eth0" Nov 4 04:59:03.297888 containerd[1611]: time="2025-11-04T04:59:03.297839346Z" level=info msg="connecting to shim a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42" address="unix:///run/containerd/s/bdcfeb40cbb83c18602d2103dff29c88ebab86225b703c49bae2bbe3b02a5246" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:03.330889 systemd[1]: Started cri-containerd-a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42.scope - libcontainer container a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42. Nov 4 04:59:03.348072 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:03.383831 containerd[1611]: time="2025-11-04T04:59:03.383788132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-gqz2h,Uid:5ad3df36-c874-47aa-a593-08839096e8e7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a4ae01fdfa28cdfb41ea18fcf999d5c30435557ca7224fbfaba64c385a94fb42\"" Nov 4 04:59:03.405186 kubelet[2838]: E1104 04:59:03.405150 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:03.575715 containerd[1611]: time="2025-11-04T04:59:03.575480870Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:03.577216 containerd[1611]: time="2025-11-04T04:59:03.577163705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:59:03.577378 containerd[1611]: time="2025-11-04T04:59:03.577258425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:03.577575 kubelet[2838]: E1104 04:59:03.577496 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:03.577665 kubelet[2838]: E1104 04:59:03.577587 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:03.578164 kubelet[2838]: E1104 04:59:03.577893 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bskz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:03.578308 containerd[1611]: time="2025-11-04T04:59:03.577998033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:03.579277 kubelet[2838]: E1104 04:59:03.579233 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219" Nov 4 04:59:03.904539 containerd[1611]: time="2025-11-04T04:59:03.904461303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:03.905798 containerd[1611]: time="2025-11-04T04:59:03.905762601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:03.905798 containerd[1611]: time="2025-11-04T04:59:03.905788229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:03.906078 kubelet[2838]: E1104 04:59:03.906012 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:03.906078 kubelet[2838]: E1104 04:59:03.906075 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:03.906286 kubelet[2838]: E1104 04:59:03.906240 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6677,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77f5f6cfbf-gqz2h_calico-apiserver(5ad3df36-c874-47aa-a593-08839096e8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:03.907676 kubelet[2838]: E1104 04:59:03.907641 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7" Nov 4 04:59:04.131982 containerd[1611]: time="2025-11-04T04:59:04.131897346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-t46fb,Uid:52545a29-818f-419d-b4f9-3a5f212c18e5,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:59:04.132969 kubelet[2838]: E1104 04:59:04.132930 2838 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:04.133263 containerd[1611]: time="2025-11-04T04:59:04.133227548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tdwlc,Uid:b662e523-ecd3-47d8-8489-153b58632cbe,Namespace:kube-system,Attempt:0,}" Nov 4 04:59:04.292184 systemd-networkd[1516]: cali2239e494fcb: Link UP Nov 4 04:59:04.294164 systemd-networkd[1516]: cali2239e494fcb: Gained carrier Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.209 [INFO][4765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0 calico-apiserver-77f5f6cfbf- calico-apiserver 52545a29-818f-419d-b4f9-3a5f212c18e5 867 0 2025-11-04 04:58:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77f5f6cfbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77f5f6cfbf-t46fb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2239e494fcb [] [] }} ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.209 [INFO][4765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.250 [INFO][4792] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" HandleID="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Workload="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.250 [INFO][4792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" HandleID="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Workload="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77f5f6cfbf-t46fb", "timestamp":"2025-11-04 04:59:04.250296199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.250 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.250 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.250 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.259 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.265 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.269 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.271 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.273 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.273 [INFO][4792] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.275 [INFO][4792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6 Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.279 [INFO][4792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.285 [INFO][4792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.285 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" host="localhost" Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.285 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:04.308586 containerd[1611]: 2025-11-04 04:59:04.285 [INFO][4792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" HandleID="k8s-pod-network.6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Workload="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" Nov 4 04:59:04.309713 containerd[1611]: 2025-11-04 04:59:04.288 [INFO][4765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0", GenerateName:"calico-apiserver-77f5f6cfbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"52545a29-818f-419d-b4f9-3a5f212c18e5", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f5f6cfbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77f5f6cfbf-t46fb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2239e494fcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:04.309713 containerd[1611]: 2025-11-04 04:59:04.288 [INFO][4765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" Nov 4 04:59:04.309713 containerd[1611]: 2025-11-04 04:59:04.288 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2239e494fcb ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" Nov 4 04:59:04.309713 containerd[1611]: 2025-11-04 04:59:04.292 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" Nov 4 04:59:04.309713 containerd[1611]: 2025-11-04 04:59:04.293 [INFO][4765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0", GenerateName:"calico-apiserver-77f5f6cfbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"52545a29-818f-419d-b4f9-3a5f212c18e5", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f5f6cfbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6", Pod:"calico-apiserver-77f5f6cfbf-t46fb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2239e494fcb", MAC:"8a:1b:c8:04:24:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:04.309713 containerd[1611]: 2025-11-04 04:59:04.304 [INFO][4765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" Namespace="calico-apiserver" Pod="calico-apiserver-77f5f6cfbf-t46fb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77f5f6cfbf--t46fb-eth0" Nov 4 04:59:04.332477 containerd[1611]: time="2025-11-04T04:59:04.331858260Z" level=info msg="connecting to shim 6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6" address="unix:///run/containerd/s/eebae9c79d5046b83df60561bd9cb535fe33f7d757ca06c3d3e44b6072d4f6c3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:04.371894 systemd[1]: Started cri-containerd-6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6.scope - libcontainer container 6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6. Nov 4 04:59:04.380827 systemd-networkd[1516]: cali3dc986bf6d8: Gained IPv6LL Nov 4 04:59:04.394490 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:04.404369 systemd-networkd[1516]: cali8ec5ce45c16: Link UP Nov 4 04:59:04.405235 systemd-networkd[1516]: cali8ec5ce45c16: Gained carrier Nov 4 04:59:04.411201 kubelet[2838]: E1104 04:59:04.411075 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:04.412895 kubelet[2838]: E1104 04:59:04.412831 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7" Nov 4 04:59:04.413092 kubelet[2838]: E1104 
04:59:04.412765 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.207 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0 coredns-674b8bbfcf- kube-system b662e523-ecd3-47d8-8489-153b58632cbe 855 0 2025-11-04 04:58:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-tdwlc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ec5ce45c16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.207 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.257 [INFO][4790] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" HandleID="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Workload="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.257 [INFO][4790] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" HandleID="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Workload="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fec0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-tdwlc", "timestamp":"2025-11-04 04:59:04.257458759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.258 [INFO][4790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.286 [INFO][4790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.286 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.361 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.368 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.372 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.374 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.376 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.376 [INFO][4790] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.378 [INFO][4790] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9 Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.388 [INFO][4790] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.395 [INFO][4790] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.395 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" host="localhost" Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.395 [INFO][4790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:04.427503 containerd[1611]: 2025-11-04 04:59:04.395 [INFO][4790] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" HandleID="k8s-pod-network.aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Workload="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" Nov 4 04:59:04.428081 containerd[1611]: 2025-11-04 04:59:04.399 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b662e523-ecd3-47d8-8489-153b58632cbe", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-tdwlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ec5ce45c16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:04.428081 containerd[1611]: 2025-11-04 04:59:04.400 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" Nov 4 04:59:04.428081 containerd[1611]: 2025-11-04 04:59:04.400 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ec5ce45c16 ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" Nov 4 04:59:04.428081 containerd[1611]: 2025-11-04 04:59:04.406 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" Nov 4 04:59:04.428081 containerd[1611]: 2025-11-04 04:59:04.406 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b662e523-ecd3-47d8-8489-153b58632cbe", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9", Pod:"coredns-674b8bbfcf-tdwlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ec5ce45c16", MAC:"4e:12:76:64:07:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:04.428081 containerd[1611]: 2025-11-04 04:59:04.422 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-tdwlc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tdwlc-eth0" Nov 4 04:59:04.453855 containerd[1611]: time="2025-11-04T04:59:04.453781269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f5f6cfbf-t46fb,Uid:52545a29-818f-419d-b4f9-3a5f212c18e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6be51ebb74e8e148f7e6322f3bc3c3d82c4a085bc32340355b6a9ef4a7b8a3f6\"" Nov 4 04:59:04.457573 containerd[1611]: time="2025-11-04T04:59:04.457517399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:04.479643 containerd[1611]: time="2025-11-04T04:59:04.477882804Z" level=info msg="connecting to shim aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9" address="unix:///run/containerd/s/5b1bc7cf4576f9fa9a9fe95ec5ef1295856ca049e361aa06c086fd0fe20fe361" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:04.520831 systemd[1]: Started cri-containerd-aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9.scope - libcontainer container aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9. 
Nov 4 04:59:04.538045 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:59:04.571687 containerd[1611]: time="2025-11-04T04:59:04.571518983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tdwlc,Uid:b662e523-ecd3-47d8-8489-153b58632cbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9\"" Nov 4 04:59:04.574138 kubelet[2838]: E1104 04:59:04.574106 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:04.579173 containerd[1611]: time="2025-11-04T04:59:04.579109928Z" level=info msg="CreateContainer within sandbox \"aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:59:04.592764 containerd[1611]: time="2025-11-04T04:59:04.592697325Z" level=info msg="Container 973a6fc4a51f50f810b22350a8b21f2a1060b5970284c568d4fd04d0ac9b84d0: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:59:04.599775 containerd[1611]: time="2025-11-04T04:59:04.599731190Z" level=info msg="CreateContainer within sandbox \"aa2ee6c2468bfc59ead72b2ad2193f1966a0d81dec51e8b3652ec2b0e4aed3b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"973a6fc4a51f50f810b22350a8b21f2a1060b5970284c568d4fd04d0ac9b84d0\"" Nov 4 04:59:04.600392 containerd[1611]: time="2025-11-04T04:59:04.600240649Z" level=info msg="StartContainer for \"973a6fc4a51f50f810b22350a8b21f2a1060b5970284c568d4fd04d0ac9b84d0\"" Nov 4 04:59:04.601465 containerd[1611]: time="2025-11-04T04:59:04.601407821Z" level=info msg="connecting to shim 973a6fc4a51f50f810b22350a8b21f2a1060b5970284c568d4fd04d0ac9b84d0" address="unix:///run/containerd/s/5b1bc7cf4576f9fa9a9fe95ec5ef1295856ca049e361aa06c086fd0fe20fe361" protocol=ttrpc version=3 Nov 4 
04:59:04.623824 systemd[1]: Started cri-containerd-973a6fc4a51f50f810b22350a8b21f2a1060b5970284c568d4fd04d0ac9b84d0.scope - libcontainer container 973a6fc4a51f50f810b22350a8b21f2a1060b5970284c568d4fd04d0ac9b84d0. Nov 4 04:59:04.659038 containerd[1611]: time="2025-11-04T04:59:04.658989628Z" level=info msg="StartContainer for \"973a6fc4a51f50f810b22350a8b21f2a1060b5970284c568d4fd04d0ac9b84d0\" returns successfully" Nov 4 04:59:04.801813 containerd[1611]: time="2025-11-04T04:59:04.801745461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:04.804095 containerd[1611]: time="2025-11-04T04:59:04.804020140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:04.804095 containerd[1611]: time="2025-11-04T04:59:04.804085615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:04.804360 kubelet[2838]: E1104 04:59:04.804310 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:04.804427 kubelet[2838]: E1104 04:59:04.804369 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:04.804593 kubelet[2838]: E1104 04:59:04.804527 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkf78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77f5f6cfbf-t46fb_calico-apiserver(52545a29-818f-419d-b4f9-3a5f212c18e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:04.805757 kubelet[2838]: E1104 04:59:04.805726 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5" Nov 4 04:59:04.892976 systemd-networkd[1516]: caliad9b71fb9ea: Gained IPv6LL Nov 4 04:59:05.414257 kubelet[2838]: E1104 04:59:05.414203 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Nov 4 04:59:05.416321 kubelet[2838]: E1104 04:59:05.416271 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5" Nov 4 04:59:05.433360 kubelet[2838]: I1104 04:59:05.433239 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tdwlc" podStartSLOduration=47.433209014 podStartE2EDuration="47.433209014s" podCreationTimestamp="2025-11-04 04:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:59:05.431743046 +0000 UTC m=+54.415275193" watchObservedRunningTime="2025-11-04 04:59:05.433209014 +0000 UTC m=+54.416741162" Nov 4 04:59:05.468870 systemd-networkd[1516]: cali2239e494fcb: Gained IPv6LL Nov 4 04:59:05.596897 systemd-networkd[1516]: cali8ec5ce45c16: Gained IPv6LL Nov 4 04:59:06.418200 kubelet[2838]: E1104 04:59:06.417910 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:06.419826 kubelet[2838]: E1104 04:59:06.418708 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5" Nov 4 04:59:07.420082 kubelet[2838]: E1104 04:59:07.420043 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:08.116905 systemd[1]: Started sshd@9-10.0.0.56:22-10.0.0.1:48446.service - OpenSSH per-connection server daemon (10.0.0.1:48446). Nov 4 04:59:08.204357 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 48446 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:08.206816 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:08.213340 systemd-logind[1587]: New session 10 of user core. Nov 4 04:59:08.220800 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 04:59:08.338410 sshd[4975]: Connection closed by 10.0.0.1 port 48446 Nov 4 04:59:08.338770 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:08.344348 systemd[1]: sshd@9-10.0.0.56:22-10.0.0.1:48446.service: Deactivated successfully. Nov 4 04:59:08.347001 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 04:59:08.348148 systemd-logind[1587]: Session 10 logged out. Waiting for processes to exit. Nov 4 04:59:08.350048 systemd-logind[1587]: Removed session 10. Nov 4 04:59:13.133763 containerd[1611]: time="2025-11-04T04:59:13.133708564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:59:13.351966 systemd[1]: Started sshd@10-10.0.0.56:22-10.0.0.1:38982.service - OpenSSH per-connection server daemon (10.0.0.1:38982). 
Nov 4 04:59:13.417735 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 38982 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:13.419605 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:13.425490 systemd-logind[1587]: New session 11 of user core. Nov 4 04:59:13.436822 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 04:59:13.466568 containerd[1611]: time="2025-11-04T04:59:13.466490452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:13.467820 containerd[1611]: time="2025-11-04T04:59:13.467781781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:59:13.467888 containerd[1611]: time="2025-11-04T04:59:13.467868686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:13.468831 kubelet[2838]: E1104 04:59:13.468782 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:13.469192 kubelet[2838]: E1104 04:59:13.468846 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:13.470563 kubelet[2838]: E1104 04:59:13.470509 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:406f63172baa4248bd0f136be65aaf62,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5gkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d9bd49888-82v9k_calico-system(60e4d4ba-e12d-4bb4-b4da-ea310004d8fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:13.473725 containerd[1611]: time="2025-11-04T04:59:13.472581997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:59:13.518795 sshd[5004]: Connection closed by 10.0.0.1 
port 38982 Nov 4 04:59:13.519147 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:13.524448 systemd[1]: sshd@10-10.0.0.56:22-10.0.0.1:38982.service: Deactivated successfully. Nov 4 04:59:13.526642 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 04:59:13.528454 systemd-logind[1587]: Session 11 logged out. Waiting for processes to exit. Nov 4 04:59:13.530191 systemd-logind[1587]: Removed session 11. Nov 4 04:59:13.825425 containerd[1611]: time="2025-11-04T04:59:13.825271318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:14.120100 containerd[1611]: time="2025-11-04T04:59:14.120016532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:14.120100 containerd[1611]: time="2025-11-04T04:59:14.120057049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:59:14.120417 kubelet[2838]: E1104 04:59:14.120354 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:14.120504 kubelet[2838]: E1104 04:59:14.120422 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:14.120657 kubelet[2838]: E1104 04:59:14.120582 2838 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d9bd49888-82v9k_calico-system(60e4d4ba-e12d-4bb4-b4da-ea310004d8fa): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:14.121938 kubelet[2838]: E1104 04:59:14.121895 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d9bd49888-82v9k" podUID="60e4d4ba-e12d-4bb4-b4da-ea310004d8fa" Nov 4 04:59:14.132861 containerd[1611]: time="2025-11-04T04:59:14.132809864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:59:14.556050 containerd[1611]: time="2025-11-04T04:59:14.555884206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:14.557130 containerd[1611]: time="2025-11-04T04:59:14.557091756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:59:14.557209 containerd[1611]: time="2025-11-04T04:59:14.557165285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:14.557386 kubelet[2838]: E1104 04:59:14.557340 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:14.557816 kubelet[2838]: E1104 04:59:14.557401 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:14.557816 kubelet[2838]: E1104 04:59:14.557577 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.
io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r89mn_calico-system(d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:14.559170 kubelet[2838]: E1104 04:59:14.559122 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6" Nov 4 04:59:15.133438 containerd[1611]: time="2025-11-04T04:59:15.133007981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:59:15.466152 containerd[1611]: time="2025-11-04T04:59:15.465973515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:15.467490 containerd[1611]: time="2025-11-04T04:59:15.467395281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:59:15.467490 containerd[1611]: time="2025-11-04T04:59:15.467452048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:15.467797 kubelet[2838]: E1104 04:59:15.467749 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:15.467862 kubelet[2838]: E1104 04:59:15.467808 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:15.468038 kubelet[2838]: E1104 04:59:15.467976 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2stn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86b5f8584f-qczbm_calico-system(1c74c150-52d9-4d9d-b4fe-59734b73de89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:15.469239 kubelet[2838]: E1104 04:59:15.469195 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89" Nov 4 04:59:17.133631 containerd[1611]: time="2025-11-04T04:59:17.133542053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:59:17.607382 containerd[1611]: time="2025-11-04T04:59:17.607303544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 
04:59:17.608921 containerd[1611]: time="2025-11-04T04:59:17.608839645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:59:17.609013 containerd[1611]: time="2025-11-04T04:59:17.608965583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:17.609249 kubelet[2838]: E1104 04:59:17.609180 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:17.609741 kubelet[2838]: E1104 04:59:17.609262 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:17.609741 kubelet[2838]: E1104 04:59:17.609444 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bskz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 4 04:59:17.611835 containerd[1611]: time="2025-11-04T04:59:17.611782230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:59:17.967672 containerd[1611]: time="2025-11-04T04:59:17.967399597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:18.072598 containerd[1611]: time="2025-11-04T04:59:18.072435589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:18.072833 containerd[1611]: time="2025-11-04T04:59:18.072566126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:59:18.073022 kubelet[2838]: E1104 04:59:18.072970 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:18.073091 kubelet[2838]: E1104 04:59:18.073032 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:18.073251 kubelet[2838]: E1104 04:59:18.073183 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bskz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:18.074500 kubelet[2838]: E1104 04:59:18.074411 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219" Nov 4 04:59:18.133363 containerd[1611]: time="2025-11-04T04:59:18.133304254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:18.434879 containerd[1611]: time="2025-11-04T04:59:18.434823524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:18.436140 containerd[1611]: time="2025-11-04T04:59:18.436095825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:18.436140 containerd[1611]: time="2025-11-04T04:59:18.436133336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:18.436385 kubelet[2838]: E1104 04:59:18.436341 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:18.436441 kubelet[2838]: E1104 04:59:18.436400 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:18.436588 kubelet[2838]: E1104 04:59:18.436550 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkf78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77f5f6cfbf-t46fb_calico-apiserver(52545a29-818f-419d-b4f9-3a5f212c18e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:18.437692 kubelet[2838]: E1104 04:59:18.437662 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5" Nov 4 04:59:18.537526 systemd[1]: Started sshd@11-10.0.0.56:22-10.0.0.1:38996.service - OpenSSH per-connection server daemon (10.0.0.1:38996). 
Nov 4 04:59:18.606325 sshd[5020]: Accepted publickey for core from 10.0.0.1 port 38996 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:18.607918 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:18.612347 systemd-logind[1587]: New session 12 of user core. Nov 4 04:59:18.624798 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 04:59:18.736537 sshd[5023]: Connection closed by 10.0.0.1 port 38996 Nov 4 04:59:18.736888 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:18.744626 systemd[1]: sshd@11-10.0.0.56:22-10.0.0.1:38996.service: Deactivated successfully. Nov 4 04:59:18.747678 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 04:59:18.749755 systemd-logind[1587]: Session 12 logged out. Waiting for processes to exit. Nov 4 04:59:18.752260 systemd-logind[1587]: Removed session 12. Nov 4 04:59:20.132591 containerd[1611]: time="2025-11-04T04:59:20.132499953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:20.676033 containerd[1611]: time="2025-11-04T04:59:20.675937723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:20.677492 containerd[1611]: time="2025-11-04T04:59:20.677393941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:20.677745 containerd[1611]: time="2025-11-04T04:59:20.677515531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:20.677823 kubelet[2838]: E1104 04:59:20.677739 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:20.677823 kubelet[2838]: E1104 04:59:20.677808 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:20.678327 kubelet[2838]: E1104 04:59:20.678025 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6677,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77f5f6cfbf-gqz2h_calico-apiserver(5ad3df36-c874-47aa-a593-08839096e8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:20.679244 kubelet[2838]: E1104 04:59:20.679181 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7" Nov 4 04:59:23.756887 systemd[1]: Started sshd@12-10.0.0.56:22-10.0.0.1:44436.service - OpenSSH per-connection server daemon (10.0.0.1:44436). 
Nov 4 04:59:23.811092 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 44436 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:23.812704 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:23.817427 systemd-logind[1587]: New session 13 of user core. Nov 4 04:59:23.823754 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 04:59:23.919179 sshd[5048]: Connection closed by 10.0.0.1 port 44436 Nov 4 04:59:23.919726 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:23.929552 systemd[1]: sshd@12-10.0.0.56:22-10.0.0.1:44436.service: Deactivated successfully. Nov 4 04:59:23.931887 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 04:59:23.932827 systemd-logind[1587]: Session 13 logged out. Waiting for processes to exit. Nov 4 04:59:23.937031 systemd[1]: Started sshd@13-10.0.0.56:22-10.0.0.1:44446.service - OpenSSH per-connection server daemon (10.0.0.1:44446). Nov 4 04:59:23.937732 systemd-logind[1587]: Removed session 13. Nov 4 04:59:24.007354 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 44446 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:24.008798 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:24.014069 systemd-logind[1587]: New session 14 of user core. Nov 4 04:59:24.021782 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 04:59:24.369633 sshd[5065]: Connection closed by 10.0.0.1 port 44446 Nov 4 04:59:24.370080 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:24.385737 systemd[1]: sshd@13-10.0.0.56:22-10.0.0.1:44446.service: Deactivated successfully. Nov 4 04:59:24.389000 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 04:59:24.391637 systemd-logind[1587]: Session 14 logged out. Waiting for processes to exit. 
Nov 4 04:59:24.399375 systemd[1]: Started sshd@14-10.0.0.56:22-10.0.0.1:44454.service - OpenSSH per-connection server daemon (10.0.0.1:44454). Nov 4 04:59:24.401230 systemd-logind[1587]: Removed session 14. Nov 4 04:59:24.479998 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 44454 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:24.481805 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:24.487265 systemd-logind[1587]: New session 15 of user core. Nov 4 04:59:24.492770 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 04:59:24.579285 sshd[5081]: Connection closed by 10.0.0.1 port 44454 Nov 4 04:59:24.579795 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:24.585389 systemd[1]: sshd@14-10.0.0.56:22-10.0.0.1:44454.service: Deactivated successfully. Nov 4 04:59:24.587734 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 04:59:24.588601 systemd-logind[1587]: Session 15 logged out. Waiting for processes to exit. Nov 4 04:59:24.590136 systemd-logind[1587]: Removed session 15. 
Nov 4 04:59:28.133579 kubelet[2838]: E1104 04:59:28.133510 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d9bd49888-82v9k" podUID="60e4d4ba-e12d-4bb4-b4da-ea310004d8fa" Nov 4 04:59:28.137439 kubelet[2838]: E1104 04:59:28.137360 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:29.133012 kubelet[2838]: E1104 04:59:29.132947 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89" Nov 4 04:59:29.133999 kubelet[2838]: E1104 04:59:29.133952 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5" Nov 4 04:59:29.133999 kubelet[2838]: E1104 04:59:29.133964 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219" Nov 4 04:59:29.134396 kubelet[2838]: E1104 04:59:29.134188 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6" Nov 4 04:59:29.598303 systemd[1]: Started sshd@15-10.0.0.56:22-10.0.0.1:44470.service - OpenSSH 
per-connection server daemon (10.0.0.1:44470). Nov 4 04:59:29.673366 sshd[5099]: Accepted publickey for core from 10.0.0.1 port 44470 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:29.677902 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:29.684394 systemd-logind[1587]: New session 16 of user core. Nov 4 04:59:29.694878 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 04:59:29.828107 sshd[5102]: Connection closed by 10.0.0.1 port 44470 Nov 4 04:59:29.828520 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:29.835579 systemd-logind[1587]: Session 16 logged out. Waiting for processes to exit. Nov 4 04:59:29.836201 systemd[1]: sshd@15-10.0.0.56:22-10.0.0.1:44470.service: Deactivated successfully. Nov 4 04:59:29.839344 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 04:59:29.842755 systemd-logind[1587]: Removed session 16. Nov 4 04:59:32.132865 kubelet[2838]: E1104 04:59:32.132787 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:34.133314 kubelet[2838]: E1104 04:59:34.133249 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7" Nov 4 04:59:34.844977 systemd[1]: Started sshd@16-10.0.0.56:22-10.0.0.1:53910.service - OpenSSH per-connection server daemon (10.0.0.1:53910). 
Nov 4 04:59:34.926736 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 53910 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:34.928854 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:34.935704 systemd-logind[1587]: New session 17 of user core. Nov 4 04:59:34.944962 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 04:59:35.059346 sshd[5149]: Connection closed by 10.0.0.1 port 53910 Nov 4 04:59:35.059711 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:35.064338 systemd[1]: sshd@16-10.0.0.56:22-10.0.0.1:53910.service: Deactivated successfully. Nov 4 04:59:35.066574 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 04:59:35.067830 systemd-logind[1587]: Session 17 logged out. Waiting for processes to exit. Nov 4 04:59:35.069125 systemd-logind[1587]: Removed session 17. Nov 4 04:59:40.076415 systemd[1]: Started sshd@17-10.0.0.56:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Nov 4 04:59:40.137675 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:40.139663 containerd[1611]: time="2025-11-04T04:59:40.139253153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:59:40.139861 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:40.147016 systemd-logind[1587]: New session 18 of user core. Nov 4 04:59:40.154980 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 04:59:40.477532 sshd[5167]: Connection closed by 10.0.0.1 port 53926 Nov 4 04:59:40.478869 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:40.484290 systemd[1]: sshd@17-10.0.0.56:22-10.0.0.1:53926.service: Deactivated successfully. 
Nov 4 04:59:40.486634 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 04:59:40.487695 systemd-logind[1587]: Session 18 logged out. Waiting for processes to exit. Nov 4 04:59:40.489287 systemd-logind[1587]: Removed session 18. Nov 4 04:59:40.510534 containerd[1611]: time="2025-11-04T04:59:40.510484267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:40.559857 containerd[1611]: time="2025-11-04T04:59:40.552940704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:40.559857 containerd[1611]: time="2025-11-04T04:59:40.552976101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:59:40.560157 kubelet[2838]: E1104 04:59:40.560095 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:40.560535 kubelet[2838]: E1104 04:59:40.560162 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:40.605685 kubelet[2838]: E1104 04:59:40.605587 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:406f63172baa4248bd0f136be65aaf62,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5gkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d9bd49888-82v9k_calico-system(60e4d4ba-e12d-4bb4-b4da-ea310004d8fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:40.612687 containerd[1611]: time="2025-11-04T04:59:40.612435155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:59:41.026161 containerd[1611]: 
time="2025-11-04T04:59:41.026078096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:41.047438 containerd[1611]: time="2025-11-04T04:59:41.047365651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:59:41.047580 containerd[1611]: time="2025-11-04T04:59:41.047442035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:41.047747 kubelet[2838]: E1104 04:59:41.047681 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:41.047811 kubelet[2838]: E1104 04:59:41.047758 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:41.047987 kubelet[2838]: E1104 04:59:41.047936 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gkbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d9bd49888-82v9k_calico-system(60e4d4ba-e12d-4bb4-b4da-ea310004d8fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:41.049159 kubelet[2838]: E1104 04:59:41.049109 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d9bd49888-82v9k" podUID="60e4d4ba-e12d-4bb4-b4da-ea310004d8fa" Nov 4 04:59:41.133135 containerd[1611]: time="2025-11-04T04:59:41.133045903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:59:41.560395 containerd[1611]: time="2025-11-04T04:59:41.560337055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:41.593026 containerd[1611]: time="2025-11-04T04:59:41.592939584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:41.593026 containerd[1611]: time="2025-11-04T04:59:41.592978087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:59:41.593344 kubelet[2838]: E1104 04:59:41.593233 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:41.593344 kubelet[2838]: E1104 04:59:41.593302 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:41.593915 kubelet[2838]: E1104 04:59:41.593576 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2stn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandl
er:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86b5f8584f-qczbm_calico-system(1c74c150-52d9-4d9d-b4fe-59734b73de89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:41.594135 containerd[1611]: time="2025-11-04T04:59:41.594103880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:59:41.595574 kubelet[2838]: E1104 04:59:41.595514 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89" Nov 4 04:59:41.917850 containerd[1611]: time="2025-11-04T04:59:41.917772997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:41.919214 containerd[1611]: time="2025-11-04T04:59:41.919136059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:59:41.919357 containerd[1611]: time="2025-11-04T04:59:41.919258449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:41.919519 kubelet[2838]: E1104 04:59:41.919457 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:41.919593 kubelet[2838]: E1104 04:59:41.919525 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:41.919864 kubelet[2838]: E1104 04:59:41.919779 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r89mn_calico-system(d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:41.921131 kubelet[2838]: E1104 04:59:41.921056 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6" Nov 4 04:59:42.132846 containerd[1611]: time="2025-11-04T04:59:42.132799568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:42.752877 containerd[1611]: time="2025-11-04T04:59:42.752813145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:42.820401 containerd[1611]: time="2025-11-04T04:59:42.820285625Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:42.820401 containerd[1611]: time="2025-11-04T04:59:42.820351829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:42.820763 kubelet[2838]: E1104 04:59:42.820700 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:42.821224 kubelet[2838]: E1104 04:59:42.820777 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:42.821224 kubelet[2838]: E1104 04:59:42.820968 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkf78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77f5f6cfbf-t46fb_calico-apiserver(52545a29-818f-419d-b4f9-3a5f212c18e5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:42.822383 kubelet[2838]: E1104 04:59:42.822324 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5" Nov 4 04:59:43.133500 containerd[1611]: time="2025-11-04T04:59:43.133447890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:59:43.474782 containerd[1611]: time="2025-11-04T04:59:43.474563150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:43.478146 containerd[1611]: time="2025-11-04T04:59:43.478072508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:59:43.478247 containerd[1611]: time="2025-11-04T04:59:43.478140186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:43.478406 kubelet[2838]: E1104 04:59:43.478348 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:43.478463 kubelet[2838]: E1104 04:59:43.478416 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:43.478680 kubelet[2838]: E1104 04:59:43.478594 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bskz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:43.480597 containerd[1611]: time="2025-11-04T04:59:43.480548708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:59:43.836071 containerd[1611]: time="2025-11-04T04:59:43.835913931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:43.872788 containerd[1611]: time="2025-11-04T04:59:43.872680068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:59:43.872788 containerd[1611]: time="2025-11-04T04:59:43.872739800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:43.872997 kubelet[2838]: E1104 04:59:43.872945 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:43.873507 kubelet[2838]: E1104 04:59:43.873004 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:43.873507 kubelet[2838]: E1104 04:59:43.873169 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bskz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnpcs_calico-system(d1afccb9-55ee-4f50-a636-3c55f302f219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:43.874535 kubelet[2838]: E1104 04:59:43.874368 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219" Nov 4 04:59:45.131943 kubelet[2838]: E1104 04:59:45.131895 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:59:45.495491 systemd[1]: Started sshd@18-10.0.0.56:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294). Nov 4 04:59:45.555335 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 04:59:45.557255 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:45.563006 systemd-logind[1587]: New session 19 of user core. 
Nov 4 04:59:45.577865 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 4 04:59:45.745358 sshd[5191]: Connection closed by 10.0.0.1 port 37294
Nov 4 04:59:45.745771 sshd-session[5188]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:45.757658 systemd[1]: sshd@18-10.0.0.56:22-10.0.0.1:37294.service: Deactivated successfully.
Nov 4 04:59:45.759850 systemd[1]: session-19.scope: Deactivated successfully.
Nov 4 04:59:45.761028 systemd-logind[1587]: Session 19 logged out. Waiting for processes to exit.
Nov 4 04:59:45.765068 systemd[1]: Started sshd@19-10.0.0.56:22-10.0.0.1:37310.service - OpenSSH per-connection server daemon (10.0.0.1:37310).
Nov 4 04:59:45.765942 systemd-logind[1587]: Removed session 19.
Nov 4 04:59:45.822538 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 37310 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 04:59:45.824404 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:45.829290 systemd-logind[1587]: New session 20 of user core.
Nov 4 04:59:45.838806 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 4 04:59:46.467819 sshd[5207]: Connection closed by 10.0.0.1 port 37310
Nov 4 04:59:46.468230 sshd-session[5204]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:46.483651 systemd[1]: sshd@19-10.0.0.56:22-10.0.0.1:37310.service: Deactivated successfully.
Nov 4 04:59:46.485779 systemd[1]: session-20.scope: Deactivated successfully.
Nov 4 04:59:46.486702 systemd-logind[1587]: Session 20 logged out. Waiting for processes to exit.
Nov 4 04:59:46.489770 systemd[1]: Started sshd@20-10.0.0.56:22-10.0.0.1:37324.service - OpenSSH per-connection server daemon (10.0.0.1:37324).
Nov 4 04:59:46.490737 systemd-logind[1587]: Removed session 20.
Nov 4 04:59:46.561883 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 37324 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 04:59:46.563680 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:46.568562 systemd-logind[1587]: New session 21 of user core.
Nov 4 04:59:46.577745 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 4 04:59:47.133516 containerd[1611]: time="2025-11-04T04:59:47.133468587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 04:59:47.780964 containerd[1611]: time="2025-11-04T04:59:47.780891705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 04:59:47.964551 containerd[1611]: time="2025-11-04T04:59:47.964438999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 04:59:47.965575 containerd[1611]: time="2025-11-04T04:59:47.964506597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 4 04:59:47.967352 kubelet[2838]: E1104 04:59:47.967248 2838 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 04:59:47.967352 kubelet[2838]: E1104 04:59:47.967331 2838 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4
04:59:47.968069 kubelet[2838]: E1104 04:59:47.967541 2838 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6677,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77f5f6cfbf-gqz2h_calico-apiserver(5ad3df36-c874-47aa-a593-08839096e8e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:47.968902 kubelet[2838]: E1104 04:59:47.968842 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7" Nov 4 04:59:52.795197 sshd[5221]: Connection closed by 10.0.0.1 port 37324 Nov 4 04:59:52.795693 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:52.807691 systemd[1]: sshd@20-10.0.0.56:22-10.0.0.1:37324.service: Deactivated successfully. 
Nov 4 04:59:52.809839 systemd[1]: session-21.scope: Deactivated successfully.
Nov 4 04:59:52.810728 systemd-logind[1587]: Session 21 logged out. Waiting for processes to exit.
Nov 4 04:59:52.814822 systemd[1]: Started sshd@21-10.0.0.56:22-10.0.0.1:37326.service - OpenSSH per-connection server daemon (10.0.0.1:37326).
Nov 4 04:59:52.816452 systemd-logind[1587]: Removed session 21.
Nov 4 04:59:52.866881 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 37326 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 04:59:52.868435 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:52.872777 systemd-logind[1587]: New session 22 of user core.
Nov 4 04:59:52.887776 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 4 04:59:53.137831 kubelet[2838]: E1104 04:59:53.137750 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d9bd49888-82v9k" podUID="60e4d4ba-e12d-4bb4-b4da-ea310004d8fa"
Nov 4 04:59:53.152105 sshd[5244]: Connection closed by 10.0.0.1 port 37326
Nov 4 04:59:53.154043 sshd-session[5241]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:53.168872 systemd[1]: sshd@21-10.0.0.56:22-10.0.0.1:37326.service: Deactivated successfully.
Nov 4 04:59:53.174079 systemd[1]: session-22.scope: Deactivated successfully.
Nov 4 04:59:53.177419 systemd-logind[1587]: Session 22 logged out. Waiting for processes to exit.
Nov 4 04:59:53.181415 systemd-logind[1587]: Removed session 22.
Nov 4 04:59:53.183374 systemd[1]: Started sshd@22-10.0.0.56:22-10.0.0.1:57258.service - OpenSSH per-connection server daemon (10.0.0.1:57258).
Nov 4 04:59:53.244929 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 57258 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 04:59:53.247085 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:53.255254 systemd-logind[1587]: New session 23 of user core.
Nov 4 04:59:53.268156 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 04:59:53.366750 sshd[5258]: Connection closed by 10.0.0.1 port 57258
Nov 4 04:59:53.367194 sshd-session[5255]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:53.372235 systemd[1]: sshd@22-10.0.0.56:22-10.0.0.1:57258.service: Deactivated successfully.
Nov 4 04:59:53.374773 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 04:59:53.376291 systemd-logind[1587]: Session 23 logged out. Waiting for processes to exit.
Nov 4 04:59:53.377779 systemd-logind[1587]: Removed session 23.
Nov 4 04:59:54.133089 kubelet[2838]: E1104 04:59:54.132987 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 04:59:55.133170 kubelet[2838]: E1104 04:59:55.133101 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6"
Nov 4 04:59:55.133736 kubelet[2838]: E1104 04:59:55.133226 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5"
Nov 4 04:59:55.133736 kubelet[2838]: E1104 04:59:55.133387 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89"
Nov 4 04:59:55.133736 kubelet[2838]: E1104 04:59:55.133447 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:59:58.391410 systemd[1]: Started sshd@23-10.0.0.56:22-10.0.0.1:57262.service - OpenSSH per-connection server daemon (10.0.0.1:57262).
Nov 4 04:59:58.459337 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 57262 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 04:59:58.461553 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:58.467073 systemd-logind[1587]: New session 24 of user core.
Nov 4 04:59:58.476760 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 04:59:58.568182 sshd[5274]: Connection closed by 10.0.0.1 port 57262
Nov 4 04:59:58.568667 sshd-session[5271]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:58.573953 systemd[1]: sshd@23-10.0.0.56:22-10.0.0.1:57262.service: Deactivated successfully.
Nov 4 04:59:58.576820 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 04:59:58.579583 systemd-logind[1587]: Session 24 logged out. Waiting for processes to exit.
Nov 4 04:59:58.580862 systemd-logind[1587]: Removed session 24.
Nov 4 05:00:00.135029 kubelet[2838]: E1104 05:00:00.134943 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7"
Nov 4 05:00:01.379595 kubelet[2838]: E1104 05:00:01.379554 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:00:03.596037 systemd[1]: Started sshd@24-10.0.0.56:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580).
Nov 4 05:00:03.651487 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 05:00:03.653405 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:03.658627 systemd-logind[1587]: New session 25 of user core.
Nov 4 05:00:03.669856 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 4 05:00:03.750264 sshd[5317]: Connection closed by 10.0.0.1 port 45580
Nov 4 05:00:03.750650 sshd-session[5314]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:03.755919 systemd[1]: sshd@24-10.0.0.56:22-10.0.0.1:45580.service: Deactivated successfully.
Nov 4 05:00:03.758212 systemd[1]: session-25.scope: Deactivated successfully.
Nov 4 05:00:03.759107 systemd-logind[1587]: Session 25 logged out. Waiting for processes to exit.
Nov 4 05:00:03.760420 systemd-logind[1587]: Removed session 25.
Nov 4 05:00:05.135030 kubelet[2838]: E1104 05:00:05.134680 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d9bd49888-82v9k" podUID="60e4d4ba-e12d-4bb4-b4da-ea310004d8fa"
Nov 4 05:00:06.133752 kubelet[2838]: E1104 05:00:06.133689 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r89mn" podUID="d1d7ce6c-768a-492d-b1d6-e8d5ad10d6d6"
Nov 4 05:00:07.137533 kubelet[2838]: E1104 05:00:07.137007 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-t46fb" podUID="52545a29-818f-419d-b4f9-3a5f212c18e5"
Nov 4 05:00:08.133853 kubelet[2838]: E1104 05:00:08.133794 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnpcs" podUID="d1afccb9-55ee-4f50-a636-3c55f302f219"
Nov 4 05:00:08.770746 systemd[1]: Started sshd@25-10.0.0.56:22-10.0.0.1:45590.service - OpenSSH per-connection server daemon (10.0.0.1:45590).
Nov 4 05:00:08.833810 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 45590 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 05:00:08.837692 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:08.841914 systemd-logind[1587]: New session 26 of user core.
Nov 4 05:00:08.850809 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 4 05:00:09.118091 sshd[5333]: Connection closed by 10.0.0.1 port 45590
Nov 4 05:00:09.118483 sshd-session[5330]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:09.124580 systemd[1]: sshd@25-10.0.0.56:22-10.0.0.1:45590.service: Deactivated successfully.
Nov 4 05:00:09.127752 systemd[1]: session-26.scope: Deactivated successfully.
Nov 4 05:00:09.131303 systemd-logind[1587]: Session 26 logged out. Waiting for processes to exit.
Nov 4 05:00:09.132824 systemd-logind[1587]: Removed session 26.
Nov 4 05:00:09.135532 kubelet[2838]: E1104 05:00:09.135035 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b5f8584f-qczbm" podUID="1c74c150-52d9-4d9d-b4fe-59734b73de89"
Nov 4 05:00:10.132458 kubelet[2838]: E1104 05:00:10.132366 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:00:12.133323 kubelet[2838]: E1104 05:00:12.133197 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h" podUID="5ad3df36-c874-47aa-a593-08839096e8e7"
Nov 4 05:00:14.139180 systemd[1]: Started sshd@26-10.0.0.56:22-10.0.0.1:51772.service - OpenSSH per-connection server daemon (10.0.0.1:51772).
Nov 4 05:00:14.313307 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 51772 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok
Nov 4 05:00:14.315755 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:14.321447 systemd-logind[1587]: New session 27 of user core.
Nov 4 05:00:14.335807 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 4 05:00:14.424974 sshd[5351]: Connection closed by 10.0.0.1 port 51772
Nov 4 05:00:14.425885 sshd-session[5348]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:14.431024 systemd[1]: sshd@26-10.0.0.56:22-10.0.0.1:51772.service: Deactivated successfully.
Nov 4 05:00:14.433893 systemd[1]: session-27.scope: Deactivated successfully.
Nov 4 05:00:14.435353 systemd-logind[1587]: Session 27 logged out. Waiting for processes to exit.
Nov 4 05:00:14.438016 systemd-logind[1587]: Removed session 27.
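The recurring kubelet "Error syncing pod" entries above all share one shape: the failing image is wrapped in an escaped `\\\"...\\\"` pair inside the `err=` field, and the affected pod appears as `pod="namespace/name"`. A minimal sketch of pulling those two fields out of such a line, assuming only the format visible in this log (the `parse_sync_error` helper is hypothetical, not part of kubelet or any tooling shown here):

```python
import re

# Matches the image name inside the escaped quotes of
# 'Back-off pulling image \\\"ghcr.io/...\\\"' as it appears literally in the log.
IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)')
# Matches the pod="namespace/name" field at the end of the entry.
POD_RE = re.compile(r'pod="([^"]+)"')

def parse_sync_error(line: str):
    """Return (pod, image) from an 'Error syncing pod' line, or None if absent."""
    image = IMAGE_RE.search(line)
    pod = POD_RE.search(line)
    return (pod.group(1), image.group(1)) if image and pod else None

# Trimmed sample modeled on the 05:00:12 entry above.
sample = (r'kubelet[2838]: E1104 05:00:12.133197 2838 pod_workers.go:1301] '
          r'"Error syncing pod, skipping" err="failed to \"StartContainer\" '
          r'for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling '
          r'image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\"\"" '
          r'pod="calico-apiserver/calico-apiserver-77f5f6cfbf-gqz2h"')
print(parse_sync_error(sample))
```

Feeding every journal line through the helper and tallying the results would show the same handful of `ghcr.io/flatcar/calico/*:v3.30.4` images failing repeatedly, consistent with the `NotFound` resolve errors logged by the runtime.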