Nov 6 00:28:56.974994 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025
Nov 6 00:28:56.975021 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:28:56.975029 kernel: BIOS-provided physical RAM map:
Nov 6 00:28:56.975036 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 6 00:28:56.975041 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 6 00:28:56.975049 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 6 00:28:56.975056 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 6 00:28:56.975062 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 6 00:28:56.975068 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 6 00:28:56.975073 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 6 00:28:56.975079 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 00:28:56.975085 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 6 00:28:56.975091 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 6 00:28:56.975097 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 6 00:28:56.975106 kernel: NX (Execute Disable) protection: active
Nov 6 00:28:56.975113 kernel: APIC: Static calls initialized
Nov 6 00:28:56.975119 kernel: SMBIOS 2.8 present.
Nov 6 00:28:56.975125 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 6 00:28:56.975131 kernel: DMI: Memory slots populated: 1/1
Nov 6 00:28:56.975138 kernel: Hypervisor detected: KVM
Nov 6 00:28:56.975146 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 6 00:28:56.975152 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 6 00:28:56.975158 kernel: kvm-clock: using sched offset of 7481816772 cycles
Nov 6 00:28:56.975165 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 6 00:28:56.975172 kernel: tsc: Detected 1999.999 MHz processor
Nov 6 00:28:56.975178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:28:56.975185 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:28:56.975192 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 6 00:28:56.975198 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 6 00:28:56.975207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:28:56.975213 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 6 00:28:56.975220 kernel: Using GB pages for direct mapping
Nov 6 00:28:56.975226 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:28:56.975232 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 6 00:28:56.975239 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:28:56.975245 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:28:56.975251 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:28:56.975258 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 6 00:28:56.975266 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:28:56.975273 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:28:56.975282 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:28:56.975289 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:28:56.975296 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 6 00:28:56.975303 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 6 00:28:56.975311 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 6 00:28:56.975318 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 6 00:28:56.975324 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 6 00:28:56.975331 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 6 00:28:56.975338 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 6 00:28:56.975344 kernel: No NUMA configuration found
Nov 6 00:28:56.975351 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 6 00:28:56.975357 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Nov 6 00:28:56.975394 kernel: Zone ranges:
Nov 6 00:28:56.975406 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:28:56.975413 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 6 00:28:56.975420 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 6 00:28:56.975428 kernel: Device empty
Nov 6 00:28:56.975496 kernel: Movable zone start for each node
Nov 6 00:28:56.975506 kernel: Early memory node ranges
Nov 6 00:28:56.975514 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 6 00:28:56.975520 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 6 00:28:56.975527 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 6 00:28:56.975538 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 6 00:28:56.975545 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:28:56.975551 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 6 00:28:56.975558 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 6 00:28:56.975565 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 6 00:28:56.975571 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 6 00:28:56.975578 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:28:56.975584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 6 00:28:56.975591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 6 00:28:56.975600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 00:28:56.975607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 6 00:28:56.975613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 6 00:28:56.975620 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:28:56.975626 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 6 00:28:56.975633 kernel: TSC deadline timer available
Nov 6 00:28:56.975640 kernel: CPU topo: Max. logical packages: 1
Nov 6 00:28:56.975646 kernel: CPU topo: Max. logical dies: 1
Nov 6 00:28:56.975653 kernel: CPU topo: Max. dies per package: 1
Nov 6 00:28:56.975661 kernel: CPU topo: Max. threads per core: 1
Nov 6 00:28:56.975668 kernel: CPU topo: Num. cores per package: 2
Nov 6 00:28:56.975674 kernel: CPU topo: Num. threads per package: 2
Nov 6 00:28:56.975680 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 6 00:28:56.975687 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 6 00:28:56.975694 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 6 00:28:56.975700 kernel: kvm-guest: setup PV sched yield
Nov 6 00:28:56.975707 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 6 00:28:56.975713 kernel: Booting paravirtualized kernel on KVM
Nov 6 00:28:56.975720 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:28:56.975729 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 6 00:28:56.975736 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 6 00:28:56.975742 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 6 00:28:56.975749 kernel: pcpu-alloc: [0] 0 1
Nov 6 00:28:56.975755 kernel: kvm-guest: PV spinlocks enabled
Nov 6 00:28:56.975762 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 6 00:28:56.975769 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:28:56.975776 kernel: random: crng init done
Nov 6 00:28:56.975785 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 6 00:28:56.975791 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 00:28:56.975798 kernel: Fallback order for Node 0: 0
Nov 6 00:28:56.975804 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 6 00:28:56.975811 kernel: Policy zone: Normal
Nov 6 00:28:56.975818 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:28:56.975824 kernel: software IO TLB: area num 2.
Nov 6 00:28:56.975830 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 6 00:28:56.975837 kernel: ftrace: allocating 40021 entries in 157 pages
Nov 6 00:28:56.975846 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:28:56.975852 kernel: Dynamic Preempt: voluntary
Nov 6 00:28:56.975859 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:28:56.975866 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:28:56.976063 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 6 00:28:56.976070 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:28:56.976077 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:28:56.976083 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:28:56.976090 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:28:56.976099 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 6 00:28:56.976105 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:28:56.976119 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:28:56.976128 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:28:56.976134 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 6 00:28:56.976141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 00:28:56.976148 kernel: Console: colour VGA+ 80x25
Nov 6 00:28:56.976155 kernel: printk: legacy console [tty0] enabled
Nov 6 00:28:56.976162 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:28:56.976169 kernel: ACPI: Core revision 20240827
Nov 6 00:28:56.976178 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 6 00:28:56.976185 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:28:56.976191 kernel: x2apic enabled
Nov 6 00:28:56.976198 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:28:56.976205 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 6 00:28:56.976212 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 6 00:28:56.976221 kernel: kvm-guest: setup PV IPIs
Nov 6 00:28:56.976228 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 00:28:56.976235 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 6 00:28:56.976242 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Nov 6 00:28:56.976249 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 6 00:28:56.976256 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 6 00:28:56.976262 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 6 00:28:56.976269 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:28:56.976276 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 00:28:56.976285 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 00:28:56.976292 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 6 00:28:56.976299 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 00:28:56.976306 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 00:28:56.976312 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 6 00:28:56.976320 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 6 00:28:56.976327 kernel: active return thunk: srso_alias_return_thunk
Nov 6 00:28:56.976334 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 6 00:28:56.976342 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 6 00:28:56.976349 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 6 00:28:56.976356 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:28:56.976363 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:28:56.976370 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:28:56.976377 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 6 00:28:56.976383 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:28:56.976390 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 6 00:28:56.976397 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 6 00:28:56.976406 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:28:56.976413 kernel: pid_max: default: 32768 minimum: 301
Nov 6 00:28:56.976420 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:28:56.976426 kernel: landlock: Up and running.
Nov 6 00:28:56.976517 kernel: SELinux: Initializing.
Nov 6 00:28:56.976531 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:28:56.976539 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:28:56.976546 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 6 00:28:56.976553 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 6 00:28:56.976564 kernel: ... version: 0
Nov 6 00:28:56.976571 kernel: ... bit width: 48
Nov 6 00:28:56.976578 kernel: ... generic registers: 6
Nov 6 00:28:56.976585 kernel: ... value mask: 0000ffffffffffff
Nov 6 00:28:56.976592 kernel: ... max period: 00007fffffffffff
Nov 6 00:28:56.976599 kernel: ... fixed-purpose events: 0
Nov 6 00:28:56.976606 kernel: ... event mask: 000000000000003f
Nov 6 00:28:56.976613 kernel: signal: max sigframe size: 3376
Nov 6 00:28:56.976619 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:28:56.976629 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:28:56.976636 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 6 00:28:56.976643 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:28:56.976649 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:28:56.976656 kernel: .... node #0, CPUs: #1
Nov 6 00:28:56.976663 kernel: smp: Brought up 1 node, 2 CPUs
Nov 6 00:28:56.976670 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Nov 6 00:28:56.976677 kernel: Memory: 3954904K/4193772K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 233440K reserved, 0K cma-reserved)
Nov 6 00:28:56.976684 kernel: devtmpfs: initialized
Nov 6 00:28:56.976693 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:28:56.976700 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:28:56.976707 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 6 00:28:56.976714 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:28:56.976721 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:28:56.976728 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:28:56.976735 kernel: audit: type=2000 audit(1762388934.504:1): state=initialized audit_enabled=0 res=1
Nov 6 00:28:56.976742 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:28:56.976749 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:28:56.976758 kernel: cpuidle: using governor menu
Nov 6 00:28:56.976765 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:28:56.976771 kernel: dca service started, version 1.12.1
Nov 6 00:28:56.976778 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 6 00:28:56.976785 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 6 00:28:56.976792 kernel: PCI: Using configuration type 1 for base access
Nov 6 00:28:56.976799 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:28:56.976806 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 00:28:56.976812 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 00:28:56.976821 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:28:56.976828 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:28:56.976835 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:28:56.976842 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:28:56.976848 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:28:56.976855 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:28:56.976862 kernel: ACPI: Interpreter enabled
Nov 6 00:28:56.976869 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 6 00:28:56.976875 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:28:56.976884 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:28:56.976891 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 00:28:56.976898 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 6 00:28:56.976905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 00:28:56.977087 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 00:28:56.977216 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 6 00:28:56.977339 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 6 00:28:56.977351 kernel: PCI host bridge to bus 0000:00
Nov 6 00:28:56.977516 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 00:28:56.977640 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 6 00:28:56.977753 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 00:28:56.977862 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 6 00:28:56.977970 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 6 00:28:56.978079 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 6 00:28:56.978193 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 00:28:56.978333 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 6 00:28:56.979544 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 6 00:28:56.979682 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 6 00:28:56.979805 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 6 00:28:56.979925 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 6 00:28:56.980044 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 00:28:56.980181 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:28:56.980304 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 6 00:28:56.980423 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 6 00:28:56.983644 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 6 00:28:56.983787 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 6 00:28:56.984107 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 6 00:28:56.984227 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 6 00:28:56.984353 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 6 00:28:56.984514 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 6 00:28:56.984653 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 6 00:28:56.984775 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 6 00:28:56.984903 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 6 00:28:56.985207 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 6 00:28:56.985331 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 6 00:28:56.985551 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 6 00:28:56.985681 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 6 00:28:56.985692 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 6 00:28:56.985699 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 6 00:28:56.985706 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 6 00:28:56.985713 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 6 00:28:56.985720 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 6 00:28:56.985731 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 6 00:28:56.985738 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 6 00:28:56.985745 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 6 00:28:56.985764 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 6 00:28:56.985797 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 6 00:28:56.985807 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 6 00:28:56.986040 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 6 00:28:56.986051 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 6 00:28:56.986058 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 6 00:28:56.986070 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 6 00:28:56.986077 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 6 00:28:56.986083 kernel: iommu: Default domain type: Translated
Nov 6 00:28:56.986090 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 00:28:56.986097 kernel: PCI: Using ACPI for IRQ routing
Nov 6 00:28:56.986104 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 6 00:28:56.986111 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 6 00:28:56.986118 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 6 00:28:56.986256 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 6 00:28:56.986383 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 6 00:28:56.987430 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 6 00:28:56.987468 kernel: vgaarb: loaded
Nov 6 00:28:56.987476 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 6 00:28:56.987484 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 6 00:28:56.987491 kernel: clocksource: Switched to clocksource kvm-clock
Nov 6 00:28:56.987498 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 00:28:56.987505 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 00:28:56.987516 kernel: pnp: PnP ACPI init
Nov 6 00:28:56.987666 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 6 00:28:56.987677 kernel: pnp: PnP ACPI: found 5 devices
Nov 6 00:28:56.987685 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 00:28:56.987692 kernel: NET: Registered PF_INET protocol family
Nov 6 00:28:56.987699 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 6 00:28:56.987707 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 6 00:28:56.987714 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 00:28:56.987724 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 00:28:56.987732 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 6 00:28:56.987739 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 6 00:28:56.987746 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:28:56.987753 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:28:56.987761 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 00:28:56.987768 kernel: NET: Registered PF_XDP protocol family
Nov 6 00:28:56.987882 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 6 00:28:56.988084 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 6 00:28:56.988198 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 6 00:28:56.988313 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 6 00:28:56.988422 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 6 00:28:56.988572 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 6 00:28:56.988587 kernel: PCI: CLS 0 bytes, default 64
Nov 6 00:28:56.988595 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 6 00:28:56.988603 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 6 00:28:56.988610 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 6 00:28:56.988622 kernel: Initialise system trusted keyrings
Nov 6 00:28:56.988629 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 6 00:28:56.988636 kernel: Key type asymmetric registered
Nov 6 00:28:56.988643 kernel: Asymmetric key parser 'x509' registered
Nov 6 00:28:56.988650 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 00:28:56.988657 kernel: io scheduler mq-deadline registered
Nov 6 00:28:56.988664 kernel: io scheduler kyber registered
Nov 6 00:28:56.988670 kernel: io scheduler bfq registered
Nov 6 00:28:56.988677 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 00:28:56.988687 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 6 00:28:56.988694 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 6 00:28:56.988701 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 00:28:56.988708 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:28:56.988716 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 6 00:28:56.988723 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 6 00:28:56.988730 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 6 00:28:56.988865 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 6 00:28:56.988876 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 00:28:56.988994 kernel: rtc_cmos 00:03: registered as rtc0
Nov 6 00:28:56.989109 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T00:28:56 UTC (1762388936)
Nov 6 00:28:56.989221 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 6 00:28:56.989230 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 6 00:28:56.989237 kernel: NET: Registered PF_INET6 protocol family
Nov 6 00:28:56.989244 kernel: Segment Routing with IPv6
Nov 6 00:28:56.989251 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 00:28:56.989258 kernel: NET: Registered PF_PACKET protocol family
Nov 6 00:28:56.989268 kernel: Key type dns_resolver registered
Nov 6 00:28:56.989275 kernel: IPI shorthand broadcast: enabled
Nov 6 00:28:56.989282 kernel: sched_clock: Marking stable (2922004530, 372271302)->(3398519198, -104243366)
Nov 6 00:28:56.989289 kernel: registered taskstats version 1
Nov 6 00:28:56.989295 kernel: Loading compiled-in X.509 certificates
Nov 6 00:28:56.989302 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31'
Nov 6 00:28:56.989309 kernel: Demotion targets for Node 0: null
Nov 6 00:28:56.989316 kernel: Key type .fscrypt registered
Nov 6 00:28:56.989323 kernel: Key type fscrypt-provisioning registered
Nov 6 00:28:56.989332 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 00:28:56.989339 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:28:56.989346 kernel: ima: No architecture policies found
Nov 6 00:28:56.989353 kernel: clk: Disabling unused clocks
Nov 6 00:28:56.989360 kernel: Warning: unable to open an initial console.
Nov 6 00:28:56.989367 kernel: Freeing unused kernel image (initmem) memory: 45548K
Nov 6 00:28:56.989374 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:28:56.989381 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K
Nov 6 00:28:56.989387 kernel: Run /init as init process
Nov 6 00:28:56.989397 kernel: with arguments:
Nov 6 00:28:56.989404 kernel: /init
Nov 6 00:28:56.989411 kernel: with environment:
Nov 6 00:28:56.995756 kernel: HOME=/
Nov 6 00:28:56.995775 kernel: TERM=linux
Nov 6 00:28:56.995785 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:28:56.995795 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:28:56.995806 systemd[1]: Detected virtualization kvm.
Nov 6 00:28:56.995814 systemd[1]: Detected architecture x86-64.
Nov 6 00:28:56.995822 systemd[1]: Running in initrd.
Nov 6 00:28:56.995830 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:28:56.995838 systemd[1]: Hostname set to .
Nov 6 00:28:56.995846 systemd[1]: Initializing machine ID from random generator.
Nov 6 00:28:56.995853 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:28:56.995861 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:28:56.995869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:28:56.995880 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:28:56.996038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:28:56.996046 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:28:56.996054 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:28:56.996064 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 6 00:28:56.996072 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 6 00:28:56.996082 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:28:56.996090 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:28:56.996098 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:28:56.996106 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:28:56.996114 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:28:56.996122 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:28:56.996129 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:28:56.996137 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:28:56.996145 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:28:56.996188 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:28:56.996244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:28:56.996293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:28:56.996359 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:28:56.996407 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:28:56.996503 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:28:56.996523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:28:56.996532 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:28:56.996540 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:28:56.996549 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:28:56.996557 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:28:56.996565 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:28:56.996573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:28:56.996581 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:28:56.996617 systemd-journald[187]: Collecting audit messages is disabled.
Nov 6 00:28:56.996639 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:28:56.996647 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:28:56.996655 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:28:56.996664 systemd-journald[187]: Journal started
Nov 6 00:28:56.996681 systemd-journald[187]: Runtime Journal (/run/log/journal/2212f7ce6d9b4409becf2c0767b191a5) is 8M, max 78.2M, 70.2M free.
Nov 6 00:28:56.961900 systemd-modules-load[188]: Inserted module 'overlay'
Nov 6 00:28:57.004470 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:28:57.030482 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:28:57.032219 systemd-modules-load[188]: Inserted module 'br_netfilter'
Nov 6 00:28:57.117943 kernel: Bridge firewalling registered
Nov 6 00:28:57.119572 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:28:57.120703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:28:57.122856 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:28:57.128628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:28:57.132577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:28:57.138543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:28:57.142942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:28:57.156092 systemd-tmpfiles[211]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:28:57.161470 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:28:57.164946 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:28:57.167912 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:28:57.171702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:28:57.175875 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:28:57.179842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:28:57.201703 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:28:57.225772 systemd-resolved[226]: Positive Trust Anchors:
Nov 6 00:28:57.226992 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:28:57.227024 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:28:57.230683 systemd-resolved[226]: Defaulting to hostname 'linux'.
Nov 6 00:28:57.236414 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:28:57.239415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:28:57.309494 kernel: SCSI subsystem initialized
Nov 6 00:28:57.320467 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:28:57.333470 kernel: iscsi: registered transport (tcp)
Nov 6 00:28:57.356228 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:28:57.356275 kernel: QLogic iSCSI HBA Driver
Nov 6 00:28:57.378118 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:28:57.395998 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:28:57.399878 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:28:57.456229 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:28:57.460198 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:28:57.519471 kernel: raid6: avx2x4 gen() 32031 MB/s
Nov 6 00:28:57.538488 kernel: raid6: avx2x2 gen() 29910 MB/s
Nov 6 00:28:57.559109 kernel: raid6: avx2x1 gen() 21936 MB/s
Nov 6 00:28:57.559135 kernel: raid6: using algorithm avx2x4 gen() 32031 MB/s
Nov 6 00:28:57.578488 kernel: raid6: .... xor() 4512 MB/s, rmw enabled
Nov 6 00:28:57.578513 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 00:28:57.600478 kernel: xor: automatically using best checksumming function avx
Nov 6 00:28:57.751477 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 00:28:57.759451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:28:57.762807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:28:57.785574 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Nov 6 00:28:57.792387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:28:57.797533 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 00:28:57.826848 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Nov 6 00:28:57.859181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:28:57.861696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:28:57.943806 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:28:57.947253 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 00:28:58.029460 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Nov 6 00:28:58.033462 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 00:28:58.214521 kernel: scsi host0: Virtio SCSI HBA
Nov 6 00:28:58.215319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:28:58.215528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:28:58.226633 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 6 00:28:58.218146 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:28:58.233832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:28:58.237304 kernel: libata version 3.00 loaded.
Nov 6 00:28:58.239861 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:28:58.317521 kernel: AES CTR mode by8 optimization enabled
Nov 6 00:28:58.319774 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 00:28:58.343534 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 6 00:28:58.343796 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 6 00:28:58.344143 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 6 00:28:58.344293 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 6 00:28:58.345456 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 6 00:28:58.354472 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 00:28:58.354521 kernel: GPT:9289727 != 167739391
Nov 6 00:28:58.354534 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 00:28:58.354545 kernel: GPT:9289727 != 167739391
Nov 6 00:28:58.354555 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 00:28:58.354565 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 6 00:28:58.354575 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 6 00:28:58.370477 kernel: ahci 0000:00:1f.2: version 3.0
Nov 6 00:28:58.371562 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 6 00:28:58.373686 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 6 00:28:58.373967 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 6 00:28:58.374120 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 6 00:28:58.377494 kernel: scsi host1: ahci
Nov 6 00:28:58.384479 kernel: scsi host2: ahci
Nov 6 00:28:58.388454 kernel: scsi host3: ahci
Nov 6 00:28:58.389473 kernel: scsi host4: ahci
Nov 6 00:28:58.397640 kernel: scsi host5: ahci
Nov 6 00:28:58.401476 kernel: scsi host6: ahci
Nov 6 00:28:58.402275 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Nov 6 00:28:58.402290 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Nov 6 00:28:58.402301 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Nov 6 00:28:58.402312 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Nov 6 00:28:58.402322 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Nov 6 00:28:58.402332 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Nov 6 00:28:58.456453 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 6 00:28:58.530586 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:28:58.546426 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 6 00:28:58.554100 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 6 00:28:58.555292 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 6 00:28:58.566131 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 6 00:28:58.569080 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 00:28:58.604495 disk-uuid[607]: Primary Header is updated.
Nov 6 00:28:58.604495 disk-uuid[607]: Secondary Entries is updated.
Nov 6 00:28:58.604495 disk-uuid[607]: Secondary Header is updated.
Nov 6 00:28:58.621477 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 6 00:28:58.644479 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 6 00:28:58.724467 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 6 00:28:58.724522 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 6 00:28:58.724534 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 6 00:28:58.724545 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 6 00:28:58.724555 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 6 00:28:58.729452 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 6 00:28:58.834460 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:28:58.863192 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:28:58.864402 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:28:58.867259 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:28:58.871564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 00:28:58.910665 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:28:59.642506 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 6 00:28:59.644941 disk-uuid[608]: The operation has completed successfully.
Nov 6 00:28:59.701370 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 00:28:59.701549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 00:28:59.736567 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 6 00:28:59.752290 sh[637]: Success
Nov 6 00:28:59.776630 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 00:28:59.776671 kernel: device-mapper: uevent: version 1.0.3
Nov 6 00:28:59.783483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 00:28:59.794461 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Nov 6 00:28:59.843551 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:28:59.847527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 6 00:28:59.864240 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 6 00:28:59.877459 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (649)
Nov 6 00:28:59.877491 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175
Nov 6 00:28:59.881729 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:28:59.897311 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 6 00:28:59.897344 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 00:28:59.897359 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 00:28:59.902272 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 6 00:28:59.903533 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:28:59.905027 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 00:28:59.905752 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 00:28:59.910155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 00:28:59.941703 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (682)
Nov 6 00:28:59.945749 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:28:59.949565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:28:59.956579 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 6 00:28:59.956611 kernel: BTRFS info (device sda6): turning on async discard
Nov 6 00:28:59.960949 kernel: BTRFS info (device sda6): enabling free space tree
Nov 6 00:28:59.969516 kernel: BTRFS info (device sda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:28:59.971143 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 00:28:59.975173 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 00:29:00.043198 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:29:00.057571 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:29:00.103276 ignition[761]: Ignition 2.22.0
Nov 6 00:29:00.103288 ignition[761]: Stage: fetch-offline
Nov 6 00:29:00.103323 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:00.103334 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 6 00:29:00.103474 ignition[761]: parsed url from cmdline: ""
Nov 6 00:29:00.103480 ignition[761]: no config URL provided
Nov 6 00:29:00.103487 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:29:00.103498 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:29:00.109790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:29:00.103503 ignition[761]: failed to fetch config: resource requires networking
Nov 6 00:29:00.112157 systemd-networkd[820]: lo: Link UP
Nov 6 00:29:00.103647 ignition[761]: Ignition finished successfully
Nov 6 00:29:00.112161 systemd-networkd[820]: lo: Gained carrier
Nov 6 00:29:00.116103 systemd-networkd[820]: Enumeration completed
Nov 6 00:29:00.116186 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:29:00.117652 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:29:00.117657 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:29:00.119135 systemd[1]: Reached target network.target - Network.
Nov 6 00:29:00.119883 systemd-networkd[820]: eth0: Link UP
Nov 6 00:29:00.120060 systemd-networkd[820]: eth0: Gained carrier
Nov 6 00:29:00.120071 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:29:00.123710 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 6 00:29:00.153490 ignition[829]: Ignition 2.22.0
Nov 6 00:29:00.154526 ignition[829]: Stage: fetch
Nov 6 00:29:00.154649 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:00.154669 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 6 00:29:00.154743 ignition[829]: parsed url from cmdline: ""
Nov 6 00:29:00.154747 ignition[829]: no config URL provided
Nov 6 00:29:00.154753 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:29:00.154761 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:29:00.154795 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 6 00:29:00.155557 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 6 00:29:00.356207 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 6 00:29:00.356360 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 6 00:29:00.757034 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 6 00:29:00.757496 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 6 00:29:01.396501 systemd-networkd[820]: eth0: DHCPv4 address 172.232.1.216/24, gateway 172.232.1.1 acquired from 23.33.176.69
Nov 6 00:29:01.493643 systemd-networkd[820]: eth0: Gained IPv6LL
Nov 6 00:29:01.558389 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 6 00:29:01.655169 ignition[829]: PUT result: OK
Nov 6 00:29:01.656154 ignition[829]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 6 00:29:01.767224 ignition[829]: GET result: OK
Nov 6 00:29:01.767356 ignition[829]: parsing config with SHA512: 9bc15064824ff038e9a77fa00a8e5a5cf5470fab1d237b89c549214cc7c63c95f1e5d9f914e255ad90a0d94597f7a9239423b34cd10eb044a12d4c8cdac194c7
Nov 6 00:29:01.772828 unknown[829]: fetched base config from "system"
Nov 6 00:29:01.772853 unknown[829]: fetched base config from "system"
Nov 6 00:29:01.773171 ignition[829]: fetch: fetch complete
Nov 6 00:29:01.772860 unknown[829]: fetched user config from "akamai"
Nov 6 00:29:01.773177 ignition[829]: fetch: fetch passed
Nov 6 00:29:01.777417 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 6 00:29:01.773221 ignition[829]: Ignition finished successfully
Nov 6 00:29:01.800555 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 00:29:01.827881 ignition[836]: Ignition 2.22.0
Nov 6 00:29:01.827891 ignition[836]: Stage: kargs
Nov 6 00:29:01.828195 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:01.830509 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 00:29:01.828205 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 6 00:29:01.828810 ignition[836]: kargs: kargs passed
Nov 6 00:29:01.828851 ignition[836]: Ignition finished successfully
Nov 6 00:29:01.834566 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 00:29:01.862669 ignition[842]: Ignition 2.22.0
Nov 6 00:29:01.862696 ignition[842]: Stage: disks
Nov 6 00:29:01.862825 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:01.865816 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 00:29:01.862836 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 6 00:29:01.867493 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 00:29:01.863518 ignition[842]: disks: disks passed
Nov 6 00:29:01.869050 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 00:29:01.863563 ignition[842]: Ignition finished successfully
Nov 6 00:29:01.871407 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:29:01.873586 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:29:01.875350 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:29:01.878530 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 00:29:01.912649 systemd-fsck[851]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 6 00:29:01.917147 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 00:29:01.920533 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 00:29:02.037482 kernel: EXT4-fs (sda9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none.
Nov 6 00:29:02.038168 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 00:29:02.039747 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:29:02.042479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:29:02.045311 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 00:29:02.048743 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 00:29:02.050385 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 00:29:02.050409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:29:02.059917 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 00:29:02.063638 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 00:29:02.071244 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (859)
Nov 6 00:29:02.071297 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:29:02.074895 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:29:02.084791 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 6 00:29:02.084833 kernel: BTRFS info (device sda6): turning on async discard
Nov 6 00:29:02.084845 kernel: BTRFS info (device sda6): enabling free space tree
Nov 6 00:29:02.089548 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:29:02.141280 initrd-setup-root[883]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 00:29:02.149020 initrd-setup-root[890]: cut: /sysroot/etc/group: No such file or directory
Nov 6 00:29:02.155336 initrd-setup-root[897]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 00:29:02.161498 initrd-setup-root[904]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 00:29:02.282933 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 00:29:02.287062 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 00:29:02.290557 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 00:29:02.305188 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 00:29:02.311466 kernel: BTRFS info (device sda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:29:02.325679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 00:29:02.346611 ignition[972]: INFO : Ignition 2.22.0
Nov 6 00:29:02.349413 ignition[972]: INFO : Stage: mount
Nov 6 00:29:02.349413 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:02.349413 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 6 00:29:02.349413 ignition[972]: INFO : mount: mount passed
Nov 6 00:29:02.349413 ignition[972]: INFO : Ignition finished successfully
Nov 6 00:29:02.351846 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 00:29:02.356551 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 00:29:03.039522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:29:03.067478 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (983)
Nov 6 00:29:03.072466 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:29:03.072489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:29:03.085420 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 6 00:29:03.085536 kernel: BTRFS info (device sda6): turning on async discard
Nov 6 00:29:03.085555 kernel: BTRFS info (device sda6): enabling free space tree
Nov 6 00:29:03.090691 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:29:03.125815 ignition[999]: INFO : Ignition 2.22.0
Nov 6 00:29:03.125815 ignition[999]: INFO : Stage: files
Nov 6 00:29:03.128862 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:03.128862 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 6 00:29:03.128862 ignition[999]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 00:29:03.132632 ignition[999]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 00:29:03.132632 ignition[999]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 00:29:03.135727 ignition[999]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 00:29:03.135727 ignition[999]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 00:29:03.138531 ignition[999]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 00:29:03.136077 unknown[999]: wrote ssh authorized keys file for user: core
Nov 6 00:29:03.141375 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 00:29:03.141375 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 6 00:29:03.375110 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 00:29:03.546817 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:29:03.548630 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:29:03.559935 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:29:03.559935 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:29:03.559935 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 00:29:03.559935 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 00:29:03.559935 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 00:29:03.559935 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 6 00:29:04.065390 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 6 00:29:04.564849 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 00:29:04.564849 ignition[999]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 6 00:29:04.568477 ignition[999]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:29:04.570084 ignition[999]: INFO : files: files passed
Nov 6 00:29:04.570084 ignition[999]: INFO : Ignition finished successfully
Nov 6 00:29:04.572232 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 00:29:04.576567 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 00:29:04.582606 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 00:29:04.590208 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 00:29:04.594492 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 00:29:04.605178 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:29:04.606745 initrd-setup-root-after-ignition[1029]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:29:04.608698 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:29:04.610339 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:29:04.612400 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 00:29:04.614750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 00:29:04.660664 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 00:29:04.660833 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 00:29:04.663273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 00:29:04.664856 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 00:29:04.666925 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 00:29:04.667684 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 00:29:04.690185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:29:04.694462 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 00:29:04.716555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:29:04.717781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:29:04.719914 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 00:29:04.722116 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 00:29:04.722255 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:29:04.725027 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 00:29:04.726546 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 00:29:04.728519 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 00:29:04.730723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:29:04.732721 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 00:29:04.734719 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:29:04.736891 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 00:29:04.738894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:29:04.741106 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 00:29:04.743186 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 00:29:04.745246 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 00:29:04.747108 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 00:29:04.747241 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:29:04.749737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:29:04.751314 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:29:04.753402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 00:29:04.753544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:29:04.755700 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 00:29:04.755796 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:29:04.758923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 00:29:04.759074 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:29:04.760308 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 00:29:04.760465 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 00:29:04.764532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 00:29:04.772810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 00:29:04.773885 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 00:29:04.774036 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:29:04.775593 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 00:29:04.775730 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:29:04.784895 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 00:29:04.785001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 00:29:04.806520 ignition[1053]: INFO : Ignition 2.22.0
Nov 6 00:29:04.806520 ignition[1053]: INFO : Stage: umount
Nov 6 00:29:04.811016 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 00:29:04.833634 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:04.833634 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 6 00:29:04.833634 ignition[1053]: INFO : umount: umount passed
Nov 6 00:29:04.833634 ignition[1053]: INFO : Ignition finished successfully
Nov 6 00:29:04.814050 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 00:29:04.814164 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 00:29:04.831105 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 00:29:04.831207 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 00:29:04.833097 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 00:29:04.833188 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 00:29:04.834638 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 00:29:04.834698 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 00:29:04.836418 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 6 00:29:04.836557 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 6 00:29:04.838409 systemd[1]: Stopped target network.target - Network.
Nov 6 00:29:04.840060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 00:29:04.840114 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:29:04.841958 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 00:29:04.843672 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 00:29:04.844500 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:29:04.845546 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 00:29:04.847269 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 00:29:04.849033 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 00:29:04.849082 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:29:04.850809 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 00:29:04.850854 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:29:04.852616 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 00:29:04.852671 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 00:29:04.854404 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 00:29:04.854486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 00:29:04.856240 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 00:29:04.856291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 00:29:04.858169 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 00:29:04.859948 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 00:29:04.867046 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 00:29:04.867186 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 00:29:04.873017 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 6 00:29:04.873287 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 00:29:04.873403 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 00:29:04.876676 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 6 00:29:04.877231 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 6 00:29:04.879295 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 00:29:04.879335 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:29:04.882042 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 00:29:04.883887 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 00:29:04.883940 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:29:04.886973 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 00:29:04.887024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:29:04.890538 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 00:29:04.890589 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:29:04.891928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 00:29:04.891978 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:29:04.896557 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:29:04.901660 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 6 00:29:04.901725 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:29:04.916089 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 00:29:04.916211 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 00:29:04.918537 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 00:29:04.918716 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:29:04.920886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 00:29:04.920951 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:29:04.922674 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 00:29:04.922715 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:29:04.925049 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 00:29:04.925099 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:29:04.928084 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 00:29:04.928132 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:29:04.929943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 00:29:04.929996 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:29:04.933548 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 00:29:04.935239 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 6 00:29:04.935294 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:29:04.938402 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 00:29:04.938545 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:29:04.940483 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 6 00:29:04.940554 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:29:04.942548 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 00:29:04.942596 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:29:04.944118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:29:04.944167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:29:04.949701 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 6 00:29:04.949759 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Nov 6 00:29:04.949802 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 6 00:29:04.949855 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:29:04.956715 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 00:29:04.956827 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 00:29:04.958945 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 00:29:04.961493 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 00:29:04.980357 systemd[1]: Switching root.
Nov 6 00:29:05.014417 systemd-journald[187]: Journal stopped
Nov 6 00:29:06.325785 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Nov 6 00:29:06.325815 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 00:29:06.325828 kernel: SELinux: policy capability open_perms=1
Nov 6 00:29:06.325837 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 00:29:06.325846 kernel: SELinux: policy capability always_check_network=0
Nov 6 00:29:06.325857 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 00:29:06.325867 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 00:29:06.325877 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 00:29:06.325886 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 00:29:06.325895 kernel: SELinux: policy capability userspace_initial_context=0
Nov 6 00:29:06.325904 kernel: audit: type=1403 audit(1762388945.175:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 00:29:06.325914 systemd[1]: Successfully loaded SELinux policy in 76.449ms.
Nov 6 00:29:06.325927 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.796ms.
Nov 6 00:29:06.325938 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:29:06.325949 systemd[1]: Detected virtualization kvm.
Nov 6 00:29:06.325959 systemd[1]: Detected architecture x86-64.
Nov 6 00:29:06.326120 systemd[1]: Detected first boot.
Nov 6 00:29:06.326130 systemd[1]: Initializing machine ID from random generator.
Nov 6 00:29:06.326140 zram_generator::config[1097]: No configuration found.
Nov 6 00:29:06.326150 kernel: Guest personality initialized and is inactive
Nov 6 00:29:06.326161 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 6 00:29:06.326171 kernel: Initialized host personality
Nov 6 00:29:06.326180 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 00:29:06.326192 systemd[1]: Populated /etc with preset unit settings.
Nov 6 00:29:06.326203 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 6 00:29:06.326212 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 00:29:06.326222 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 00:29:06.326232 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 00:29:06.326242 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 00:29:06.326252 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 00:29:06.326264 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 00:29:06.326274 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 00:29:06.326284 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
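The `systemd 256.8 running in system mode (...)` line above encodes compile-time features as `+FLAG`/`-FLAG` tokens. As an illustration that is not part of the log, a short sketch of splitting such a string into enabled and disabled sets (the string below is a shortened excerpt of the one in the log):

```python
# Shortened excerpt of the feature string from the systemd version line above.
features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL"

def parse_features(s):
    """Split systemd's +FLAG/-FLAG tokens into (enabled, disabled) name sets."""
    tokens = s.split()
    enabled = {t[1:] for t in tokens if t.startswith("+")}
    disabled = {t[1:] for t in tokens if t.startswith("-")}
    return enabled, disabled

enabled, disabled = parse_features(features)
print("SELINUX" in enabled, "APPARMOR" in disabled)  # True True
```

This matches the rest of the log: SELinux policy loading appears above, while no AppArmor messages occur.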
Nov 6 00:29:06.326295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 00:29:06.326305 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 00:29:06.326315 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 00:29:06.326325 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:29:06.326335 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:29:06.326347 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 00:29:06.326357 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 00:29:06.326370 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 00:29:06.326382 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:29:06.326392 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 00:29:06.326402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:29:06.326413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:29:06.326423 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 00:29:06.326461 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 00:29:06.326477 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:29:06.326488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 00:29:06.326498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:29:06.326509 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:29:06.326519 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:29:06.326529 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:29:06.326539 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 00:29:06.326553 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 00:29:06.326564 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 00:29:06.326574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:29:06.326584 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:29:06.326595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:29:06.326607 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 00:29:06.326618 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 00:29:06.326628 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 00:29:06.326640 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 00:29:06.326651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:06.326661 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 00:29:06.326671 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 00:29:06.326681 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 00:29:06.326694 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 00:29:06.326705 systemd[1]: Reached target machines.target - Containers.
Nov 6 00:29:06.326715 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 00:29:06.326726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:29:06.326736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:29:06.326746 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 00:29:06.326757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:29:06.326767 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:29:06.326779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:29:06.326790 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 00:29:06.326800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:29:06.326810 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 00:29:06.326821 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 00:29:06.326831 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 00:29:06.326841 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 00:29:06.326851 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 00:29:06.326863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:29:06.326875 kernel: ACPI: bus type drm_connector registered
Nov 6 00:29:06.326886 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:29:06.326896 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:29:06.326906 kernel: fuse: init (API version 7.41)
Nov 6 00:29:06.326916 kernel: loop: module loaded
Nov 6 00:29:06.326926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:29:06.326936 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 00:29:06.326947 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 00:29:06.326959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:29:06.326969 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 6 00:29:06.326979 systemd[1]: Stopped verity-setup.service.
Nov 6 00:29:06.326990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:06.327024 systemd-journald[1188]: Collecting audit messages is disabled.
Nov 6 00:29:06.327047 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 00:29:06.327058 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 00:29:06.327068 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 00:29:06.327079 systemd-journald[1188]: Journal started
Nov 6 00:29:06.327097 systemd-journald[1188]: Runtime Journal (/run/log/journal/4203a5d9cd3e40a48a9a0f2e19774ce9) is 8M, max 78.2M, 70.2M free.
Nov 6 00:29:05.874555 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 00:29:05.888503 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 6 00:29:05.889214 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 00:29:06.331791 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:29:06.333167 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 00:29:06.334206 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 00:29:06.335235 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 00:29:06.336523 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 00:29:06.337957 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:29:06.339536 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 00:29:06.339837 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 00:29:06.341555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:29:06.342027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:29:06.343403 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:29:06.343863 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:29:06.345577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:29:06.345871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:29:06.347345 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 00:29:06.347819 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 00:29:06.349519 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:29:06.349809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:29:06.351532 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:29:06.353094 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:29:06.354756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 00:29:06.356330 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 00:29:06.374711 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:29:06.378568 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 00:29:06.382551 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 00:29:06.384502 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 00:29:06.384593 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:29:06.387228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 00:29:06.394554 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 00:29:06.396022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:29:06.402584 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 00:29:06.405722 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 00:29:06.407708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:29:06.409028 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 00:29:06.411562 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:29:06.414828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:29:06.419517 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 00:29:06.428335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:29:06.440692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 00:29:06.442636 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 00:29:06.457383 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 00:29:06.460123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 00:29:06.469762 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 00:29:06.477239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:29:06.483620 systemd-journald[1188]: Time spent on flushing to /var/log/journal/4203a5d9cd3e40a48a9a0f2e19774ce9 is 78.629ms for 1014 entries.
Nov 6 00:29:06.483620 systemd-journald[1188]: System Journal (/var/log/journal/4203a5d9cd3e40a48a9a0f2e19774ce9) is 8M, max 195.6M, 187.6M free.
Nov 6 00:29:06.573723 systemd-journald[1188]: Received client request to flush runtime journal.
Nov 6 00:29:06.573763 kernel: loop0: detected capacity change from 0 to 8
Nov 6 00:29:06.573959 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 00:29:06.573972 kernel: loop1: detected capacity change from 0 to 219144
Nov 6 00:29:06.537403 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Nov 6 00:29:06.537422 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Nov 6 00:29:06.543747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:29:06.556268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:29:06.562164 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 00:29:06.565319 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 00:29:06.581605 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
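The journald line above reports `78.629ms for 1014 entries` spent flushing the runtime journal to `/var/log/journal`. As an aside that is not part of the log, the average per-entry cost can be pulled out of such a line with a small parser:

```python
import re

# Journald flush line copied from the journal above.
line = ("systemd-journald[1188]: Time spent on flushing to "
        "/var/log/journal/4203a5d9cd3e40a48a9a0f2e19774ce9 is 78.629ms for 1014 entries.")

m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
ms, entries = float(m.group(1)), int(m.group(2))
per_entry_us = ms * 1000 / entries  # average microseconds per flushed entry
print(f"{per_entry_us:.1f} us/entry")  # 77.5 us/entry
```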
Nov 6 00:29:06.613468 kernel: loop2: detected capacity change from 0 to 128016
Nov 6 00:29:06.659826 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 00:29:06.668557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:29:06.673477 kernel: loop3: detected capacity change from 0 to 110984
Nov 6 00:29:06.706155 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Nov 6 00:29:06.708874 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Nov 6 00:29:06.718831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:29:06.725934 kernel: loop4: detected capacity change from 0 to 8
Nov 6 00:29:06.731498 kernel: loop5: detected capacity change from 0 to 219144
Nov 6 00:29:06.761466 kernel: loop6: detected capacity change from 0 to 128016
Nov 6 00:29:06.779499 kernel: loop7: detected capacity change from 0 to 110984
Nov 6 00:29:06.796614 (sd-merge)[1248]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Nov 6 00:29:06.797266 (sd-merge)[1248]: Merged extensions into '/usr'.
Nov 6 00:29:06.803495 systemd[1]: Reload requested from client PID 1222 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 6 00:29:06.803579 systemd[1]: Reloading...
Nov 6 00:29:06.892640 zram_generator::config[1271]: No configuration found.
Nov 6 00:29:06.989373 ldconfig[1217]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 00:29:07.154834 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 00:29:07.155291 systemd[1]: Reloading finished in 350 ms.
Nov 6 00:29:07.188402 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 00:29:07.190033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 00:29:07.191375 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 6 00:29:07.201697 systemd[1]: Starting ensure-sysext.service...
Nov 6 00:29:07.205554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:29:07.213679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:29:07.232702 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 6 00:29:07.232980 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 6 00:29:07.233340 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 6 00:29:07.233725 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 6 00:29:07.234690 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 6 00:29:07.235181 systemd-tmpfiles[1320]: ACLs are not supported, ignoring.
Nov 6 00:29:07.235350 systemd-tmpfiles[1320]: ACLs are not supported, ignoring.
Nov 6 00:29:07.237521 systemd[1]: Reload requested from client PID 1319 ('systemctl') (unit ensure-sysext.service)...
Nov 6 00:29:07.237543 systemd[1]: Reloading...
Nov 6 00:29:07.241546 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:29:07.241559 systemd-tmpfiles[1320]: Skipping /boot
Nov 6 00:29:07.257691 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:29:07.257703 systemd-tmpfiles[1320]: Skipping /boot
Nov 6 00:29:07.277976 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
Nov 6 00:29:07.340501 zram_generator::config[1347]: No configuration found.
Nov 6 00:29:07.608472 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 00:29:07.625468 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 6 00:29:07.663693 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 00:29:07.664203 systemd[1]: Reloading finished in 426 ms.
Nov 6 00:29:07.669552 kernel: ACPI: button: Power Button [PWRF]
Nov 6 00:29:07.673589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:29:07.676509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:29:07.718460 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 6 00:29:07.718725 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 6 00:29:07.744349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:07.746671 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:29:07.750749 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 00:29:07.752631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:29:07.755521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:29:07.758620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:29:07.761689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:29:07.763305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:29:07.763671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:29:07.768901 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 00:29:07.776119 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:29:07.784790 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:29:07.789186 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 00:29:07.791025 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:07.796903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:07.797061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:29:07.797208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:29:07.797279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:29:07.797347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:07.802417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:07.803677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:29:07.808514 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:29:07.810608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:29:07.810704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:29:07.810811 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:07.811605 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:29:07.816158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:29:07.818083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:29:07.818792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:29:07.832713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:29:07.836271 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 00:29:07.839083 systemd[1]: Finished ensure-sysext.service.
Nov 6 00:29:07.846806 kernel: EDAC MC: Ver: 3.0.0
Nov 6 00:29:07.854900 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 6 00:29:07.860969 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 00:29:07.871690 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:29:07.871980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:29:07.881302 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:29:07.881762 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:29:07.887847 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:29:07.890997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:29:07.900884 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 00:29:07.917693 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 00:29:07.938909 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 00:29:07.944264 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 00:29:07.960160 augenrules[1487]: No rules
Nov 6 00:29:07.961664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 6 00:29:07.966530 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:29:07.966788 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:29:07.970191 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 00:29:07.974615 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 6 00:29:08.014961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 6 00:29:08.058875 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 00:29:08.167799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:29:08.216541 systemd-networkd[1449]: lo: Link UP
Nov 6 00:29:08.216559 systemd-networkd[1449]: lo: Gained carrier
Nov 6 00:29:08.218584 systemd-networkd[1449]: Enumeration completed
Nov 6 00:29:08.218688 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:29:08.221490 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:29:08.221507 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:29:08.221953 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 00:29:08.222136 systemd-resolved[1451]: Positive Trust Anchors:
Nov 6 00:29:08.222351 systemd-resolved[1451]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:29:08.222417 systemd-resolved[1451]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:29:08.224214 systemd-networkd[1449]: eth0: Link UP
Nov 6 00:29:08.224613 systemd-networkd[1449]: eth0: Gained carrier
Nov 6 00:29:08.224688 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:29:08.226836 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 00:29:08.228225 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 6 00:29:08.231033 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 00:29:08.231684 systemd-resolved[1451]: Defaulting to hostname 'linux'.
Nov 6 00:29:08.233684 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:29:08.234747 systemd[1]: Reached target network.target - Network.
Nov 6 00:29:08.235861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:29:08.237091 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:29:08.238149 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 00:29:08.239171 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 00:29:08.240268 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 00:29:08.241587 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 00:29:08.242740 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 00:29:08.243793 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 00:29:08.244802 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 00:29:08.244887 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:29:08.245774 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:29:08.248396 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 00:29:08.251771 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 00:29:08.255242 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 00:29:08.256598 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 00:29:08.257601 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 00:29:08.265622 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 00:29:08.266876 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 00:29:08.290815 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 00:29:08.292869 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:29:08.293761 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:29:08.294771 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:29:08.295012 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:29:08.298394 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 00:29:08.307330 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 6 00:29:08.311649 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 00:29:08.316951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 00:29:08.319933 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 00:29:08.326799 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 00:29:08.330689 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 00:29:08.336813 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 00:29:08.340599 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 00:29:08.347793 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 00:29:08.353268 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 00:29:08.363888 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 00:29:08.371625 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 00:29:08.374313 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 00:29:08.374798 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 00:29:08.383608 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 00:29:08.393458 jq[1517]: false
Nov 6 00:29:08.399588 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 00:29:08.402354 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 00:29:08.408478 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Nov 6 00:29:08.412646 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 00:29:08.414326 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 00:29:08.415183 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 00:29:08.416529 update_engine[1533]: I20251106 00:29:08.415686 1533 main.cc:92] Flatcar Update Engine starting
Nov 6 00:29:08.415210 oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Nov 6 00:29:08.429133 jq[1537]: true
Nov 6 00:29:08.429245 coreos-metadata[1514]: Nov 06 00:29:08.425 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 6 00:29:08.417212 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 00:29:08.430593 extend-filesystems[1518]: Found /dev/sda6
Nov 6 00:29:08.417615 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 00:29:08.444491 extend-filesystems[1518]: Found /dev/sda9
Nov 6 00:29:08.447203 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting
Nov 6 00:29:08.447203 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:29:08.447203 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache
Nov 6 00:29:08.447203 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting
Nov 6 00:29:08.447203 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:29:08.437735 oslogin_cache_refresh[1519]: Failure getting users, quitting
Nov 6 00:29:08.447325 jq[1541]: true
Nov 6 00:29:08.446833 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 00:29:08.437751 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:29:08.437789 oslogin_cache_refresh[1519]: Refreshing group entry cache
Nov 6 00:29:08.442807 oslogin_cache_refresh[1519]: Failure getting groups, quitting
Nov 6 00:29:08.442817 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:29:08.451077 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 00:29:08.454975 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 00:29:08.456082 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 00:29:08.461852 extend-filesystems[1518]: Checking size of /dev/sda9
Nov 6 00:29:08.469718 tar[1540]: linux-amd64/LICENSE
Nov 6 00:29:08.469718 tar[1540]: linux-amd64/helm
Nov 6 00:29:08.498638 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 6 00:29:08.513072 extend-filesystems[1518]: Resized partition /dev/sda9
Nov 6 00:29:08.521749 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 00:29:08.527331 bash[1578]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 00:29:08.526520 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 00:29:08.526309 dbus-daemon[1515]: [system] SELinux support is enabled
Nov 6 00:29:08.541476 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Nov 6 00:29:08.534470 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 6 00:29:08.544680 update_engine[1533]: I20251106 00:29:08.543994 1533 update_check_scheduler.cc:74] Next update check in 4m23s
Nov 6 00:29:08.546242 systemd[1]: Starting sshkeys.service...
Nov 6 00:29:08.547082 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 00:29:08.549404 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 00:29:08.550423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 00:29:08.550472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 00:29:08.553631 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 00:29:08.557632 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 00:29:08.581667 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 6 00:29:08.585679 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 6 00:29:08.605534 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 6 00:29:08.605574 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 6 00:29:08.609655 systemd-logind[1532]: New seat seat0.
Nov 6 00:29:08.610429 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 00:29:08.752689 coreos-metadata[1585]: Nov 06 00:29:08.752 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 6 00:29:08.871463 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Nov 6 00:29:08.890201 extend-filesystems[1579]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 6 00:29:08.890201 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 10
Nov 6 00:29:08.890201 extend-filesystems[1579]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Nov 6 00:29:08.889921 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 6 00:29:08.906758 containerd[1556]: time="2025-11-06T00:29:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 00:29:08.906758 containerd[1556]: time="2025-11-06T00:29:08.901130386Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 6 00:29:08.906919 extend-filesystems[1518]: Resized filesystem in /dev/sda9
Nov 6 00:29:08.891430 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 6 00:29:08.940851 containerd[1556]: time="2025-11-06T00:29:08.939359135Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.06µs"
Nov 6 00:29:08.941136 containerd[1556]: time="2025-11-06T00:29:08.941112166Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 6 00:29:08.941208 containerd[1556]: time="2025-11-06T00:29:08.941192786Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 6 00:29:08.941431 containerd[1556]: time="2025-11-06T00:29:08.941412316Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 6 00:29:08.942085 containerd[1556]: time="2025-11-06T00:29:08.942065696Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 6 00:29:08.942167 containerd[1556]: time="2025-11-06T00:29:08.942150996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.942273196Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.942291676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.942623757Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.942650827Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.942664347Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.942675157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.942785297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.943040137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.943074567Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:29:08.943530 containerd[1556]: time="2025-11-06T00:29:08.943086557Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 6 00:29:08.945624 containerd[1556]: time="2025-11-06T00:29:08.945601128Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 6 00:29:08.949803 containerd[1556]: time="2025-11-06T00:29:08.947426659Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 6 00:29:08.950242 containerd[1556]: time="2025-11-06T00:29:08.950131930Z" level=info msg="metadata content store policy set" policy=shared
Nov 6 00:29:08.957206 containerd[1556]: time="2025-11-06T00:29:08.957173134Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 6 00:29:08.957362 containerd[1556]: time="2025-11-06T00:29:08.957342784Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 6 00:29:08.957625 containerd[1556]: time="2025-11-06T00:29:08.957606134Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 6 00:29:08.958644 containerd[1556]: time="2025-11-06T00:29:08.957711784Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 6 00:29:08.958644 containerd[1556]: time="2025-11-06T00:29:08.957733604Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 6 00:29:08.958644 containerd[1556]: time="2025-11-06T00:29:08.957745014Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 6 00:29:08.958644 containerd[1556]: time="2025-11-06T00:29:08.958494525Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 6 00:29:08.958644 containerd[1556]: time="2025-11-06T00:29:08.958514785Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 6 00:29:08.958644 containerd[1556]: time="2025-11-06T00:29:08.958525475Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 6 00:29:08.959076 containerd[1556]: time="2025-11-06T00:29:08.958618955Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 6 00:29:08.959076 containerd[1556]: time="2025-11-06T00:29:08.958838305Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 6 00:29:08.959076 containerd[1556]: time="2025-11-06T00:29:08.958854475Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 6 00:29:08.959713 containerd[1556]: time="2025-11-06T00:29:08.959206365Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 6 00:29:08.959713 containerd[1556]: time="2025-11-06T00:29:08.959646875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 6 00:29:08.959713 containerd[1556]: time="2025-11-06T00:29:08.959677195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 6 00:29:08.959813 containerd[1556]: time="2025-11-06T00:29:08.959793775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 6 00:29:08.960070 containerd[1556]: time="2025-11-06T00:29:08.960051005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 6 00:29:08.960178 containerd[1556]: time="2025-11-06T00:29:08.960158725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 6 00:29:08.960345 containerd[1556]: time="2025-11-06T00:29:08.960328525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 6 00:29:08.960679 containerd[1556]: time="2025-11-06T00:29:08.960465376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 6 00:29:08.960761 containerd[1556]: time="2025-11-06T00:29:08.960733216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 6 00:29:08.961349 containerd[1556]: time="2025-11-06T00:29:08.960954246Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 6 00:29:08.961349 containerd[1556]: time="2025-11-06T00:29:08.960975336Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 6 00:29:08.961349 containerd[1556]: time="2025-11-06T00:29:08.961308046Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 6 00:29:08.961729 containerd[1556]: time="2025-11-06T00:29:08.961326826Z" level=info msg="Start snapshots syncer"
Nov 6 00:29:08.962114 containerd[1556]: time="2025-11-06T00:29:08.962094706Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 6 00:29:08.967545 containerd[1556]: time="2025-11-06T00:29:08.965149338Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 6 00:29:08.967545 containerd[1556]: time="2025-11-06T00:29:08.965232078Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965379168Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965595038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965615898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965627398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965675978Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965689938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965699778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965709738Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965762808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965779198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965788988Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965865248Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965950208Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:29:08.967695 containerd[1556]: time="2025-11-06T00:29:08.965966838Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.965977468Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.965984678Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.965993258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.966007388Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.967199219Z" level=info msg="runtime interface created" Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.967207639Z" level=info msg="created NRI interface" Nov 6 00:29:08.967910 
containerd[1556]: time="2025-11-06T00:29:08.967216829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.967227009Z" level=info msg="Connect containerd service" Nov 6 00:29:08.967910 containerd[1556]: time="2025-11-06T00:29:08.967280829Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:29:08.973853 containerd[1556]: time="2025-11-06T00:29:08.971819601Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:29:08.986512 systemd-networkd[1449]: eth0: DHCPv4 address 172.232.1.216/24, gateway 172.232.1.1 acquired from 23.33.176.69 Nov 6 00:29:08.987049 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1449 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 6 00:29:08.987885 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 6 00:29:08.992495 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 6 00:29:09.023709 locksmithd[1583]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:29:10.312826 systemd-resolved[1451]: Clock change detected. Flushing caches. Nov 6 00:29:10.314210 systemd-timesyncd[1464]: Contacted time server 168.235.69.132:123 (0.flatcar.pool.ntp.org). Nov 6 00:29:10.314407 systemd-timesyncd[1464]: Initial clock synchronization to Thu 2025-11-06 00:29:10.312777 UTC. Nov 6 00:29:10.378401 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:29:10.380245 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Nov 6 00:29:10.385260 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 6 00:29:10.387242 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1606 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 6 00:29:10.394610 systemd[1]: Starting polkit.service - Authorization Manager... Nov 6 00:29:10.407303 tar[1540]: linux-amd64/README.md Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410197268Z" level=info msg="Start subscribing containerd event" Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410253448Z" level=info msg="Start recovering state" Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410339328Z" level=info msg="Start event monitor" Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410351688Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410358648Z" level=info msg="Start streaming server" Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410373408Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410380138Z" level=info msg="runtime interface starting up..." Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410385638Z" level=info msg="starting plugins..." Nov 6 00:29:10.410692 containerd[1556]: time="2025-11-06T00:29:10.410399898Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:29:10.411236 containerd[1556]: time="2025-11-06T00:29:10.411217698Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:29:10.411862 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:29:10.412228 containerd[1556]: time="2025-11-06T00:29:10.412118959Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 6 00:29:10.414469 containerd[1556]: time="2025-11-06T00:29:10.414453630Z" level=info msg="containerd successfully booted in 0.330977s" Nov 6 00:29:10.415841 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:29:10.423226 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:29:10.427217 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:29:10.445265 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:29:10.445558 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:29:10.451897 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:29:10.469840 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:29:10.474495 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:29:10.480326 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:29:10.482476 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 6 00:29:10.503566 polkitd[1624]: Started polkitd version 126 Nov 6 00:29:10.506857 polkitd[1624]: Loading rules from directory /etc/polkit-1/rules.d Nov 6 00:29:10.507119 polkitd[1624]: Loading rules from directory /run/polkit-1/rules.d Nov 6 00:29:10.507204 polkitd[1624]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 00:29:10.507413 polkitd[1624]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 6 00:29:10.507455 polkitd[1624]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 00:29:10.507496 polkitd[1624]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 6 00:29:10.508068 polkitd[1624]: Finished loading, compiling and executing 2 rules Nov 6 00:29:10.508356 systemd[1]: Started polkit.service - Authorization Manager. Nov 6 00:29:10.510253 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 6 00:29:10.510578 polkitd[1624]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 6 00:29:10.519474 systemd-resolved[1451]: System hostname changed to '172-232-1-216'. 
Nov 6 00:29:10.519557 systemd-hostnamed[1606]: Hostname set to <172-232-1-216> (transient) Nov 6 00:29:10.604397 coreos-metadata[1514]: Nov 06 00:29:10.604 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 6 00:29:10.713251 coreos-metadata[1514]: Nov 06 00:29:10.713 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 6 00:29:10.897612 coreos-metadata[1514]: Nov 06 00:29:10.897 INFO Fetch successful Nov 6 00:29:10.897612 coreos-metadata[1514]: Nov 06 00:29:10.897 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 6 00:29:10.956414 coreos-metadata[1585]: Nov 06 00:29:10.956 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 6 00:29:11.046354 coreos-metadata[1585]: Nov 06 00:29:11.046 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 6 00:29:11.153118 coreos-metadata[1514]: Nov 06 00:29:11.152 INFO Fetch successful Nov 6 00:29:11.179375 coreos-metadata[1585]: Nov 06 00:29:11.179 INFO Fetch successful Nov 6 00:29:11.200770 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:29:11.202594 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 00:29:11.207747 systemd[1]: Finished sshkeys.service. Nov 6 00:29:11.257811 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:29:11.259103 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:29:11.384317 systemd-networkd[1449]: eth0: Gained IPv6LL Nov 6 00:29:11.387178 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:29:11.388677 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:29:11.392397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:29:11.395330 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 6 00:29:11.426420 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:29:12.258735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:29:12.260568 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:29:12.262251 systemd[1]: Startup finished in 3.000s (kernel) + 8.496s (initrd) + 5.974s (userspace) = 17.471s. Nov 6 00:29:12.266739 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:29:12.741317 kubelet[1692]: E1106 00:29:12.741198 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:29:12.745048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:29:12.745298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:29:12.745921 systemd[1]: kubelet.service: Consumed 828ms CPU time, 257.7M memory peak. Nov 6 00:29:13.767362 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:29:13.768657 systemd[1]: Started sshd@0-172.232.1.216:22-139.178.89.65:54050.service - OpenSSH per-connection server daemon (139.178.89.65:54050). Nov 6 00:29:14.141968 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 54050 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:29:14.144943 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:14.154106 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:29:14.155629 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 6 00:29:14.165304 systemd-logind[1532]: New session 1 of user core. Nov 6 00:29:14.174272 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:29:14.177704 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:29:14.189829 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:29:14.193418 systemd-logind[1532]: New session c1 of user core. Nov 6 00:29:14.327767 systemd[1709]: Queued start job for default target default.target. Nov 6 00:29:14.339362 systemd[1709]: Created slice app.slice - User Application Slice. Nov 6 00:29:14.339395 systemd[1709]: Reached target paths.target - Paths. Nov 6 00:29:14.339443 systemd[1709]: Reached target timers.target - Timers. Nov 6 00:29:14.341279 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:29:14.352536 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:29:14.352604 systemd[1709]: Reached target sockets.target - Sockets. Nov 6 00:29:14.352680 systemd[1709]: Reached target basic.target - Basic System. Nov 6 00:29:14.352752 systemd[1709]: Reached target default.target - Main User Target. Nov 6 00:29:14.352808 systemd[1709]: Startup finished in 153ms. Nov 6 00:29:14.353076 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:29:14.367335 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:29:14.634752 systemd[1]: Started sshd@1-172.232.1.216:22-139.178.89.65:54066.service - OpenSSH per-connection server daemon (139.178.89.65:54066). Nov 6 00:29:15.003626 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 54066 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:29:15.005423 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:15.011385 systemd-logind[1532]: New session 2 of user core. 
Nov 6 00:29:15.017303 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:29:15.261334 sshd[1723]: Connection closed by 139.178.89.65 port 54066 Nov 6 00:29:15.262373 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:15.266836 systemd[1]: sshd@1-172.232.1.216:22-139.178.89.65:54066.service: Deactivated successfully. Nov 6 00:29:15.268725 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:29:15.270129 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:29:15.271556 systemd-logind[1532]: Removed session 2. Nov 6 00:29:15.318070 systemd[1]: Started sshd@2-172.232.1.216:22-139.178.89.65:54078.service - OpenSSH per-connection server daemon (139.178.89.65:54078). Nov 6 00:29:15.664202 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 54078 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:29:15.665717 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:15.670871 systemd-logind[1532]: New session 3 of user core. Nov 6 00:29:15.674288 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:29:15.906284 sshd[1732]: Connection closed by 139.178.89.65 port 54078 Nov 6 00:29:15.906859 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:15.911373 systemd[1]: sshd@2-172.232.1.216:22-139.178.89.65:54078.service: Deactivated successfully. Nov 6 00:29:15.913561 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:29:15.914334 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:29:15.915952 systemd-logind[1532]: Removed session 3. Nov 6 00:29:15.981504 systemd[1]: Started sshd@3-172.232.1.216:22-139.178.89.65:54082.service - OpenSSH per-connection server daemon (139.178.89.65:54082). 
Nov 6 00:29:16.349947 sshd[1738]: Accepted publickey for core from 139.178.89.65 port 54082 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:29:16.351785 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:16.357038 systemd-logind[1532]: New session 4 of user core. Nov 6 00:29:16.363279 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:29:16.612597 sshd[1741]: Connection closed by 139.178.89.65 port 54082 Nov 6 00:29:16.613910 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:16.620292 systemd[1]: sshd@3-172.232.1.216:22-139.178.89.65:54082.service: Deactivated successfully. Nov 6 00:29:16.622966 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:29:16.625041 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:29:16.627695 systemd-logind[1532]: Removed session 4. Nov 6 00:29:16.681101 systemd[1]: Started sshd@4-172.232.1.216:22-139.178.89.65:45774.service - OpenSSH per-connection server daemon (139.178.89.65:45774). Nov 6 00:29:17.060334 sshd[1747]: Accepted publickey for core from 139.178.89.65 port 45774 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:29:17.063421 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:17.070647 systemd-logind[1532]: New session 5 of user core. Nov 6 00:29:17.085427 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 6 00:29:17.272890 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:29:17.273258 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:29:17.289054 sudo[1751]: pam_unix(sudo:session): session closed for user root Nov 6 00:29:17.342452 sshd[1750]: Connection closed by 139.178.89.65 port 45774 Nov 6 00:29:17.343210 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:17.348988 systemd[1]: sshd@4-172.232.1.216:22-139.178.89.65:45774.service: Deactivated successfully. Nov 6 00:29:17.351347 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:29:17.352631 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:29:17.354381 systemd-logind[1532]: Removed session 5. Nov 6 00:29:17.404988 systemd[1]: Started sshd@5-172.232.1.216:22-139.178.89.65:45776.service - OpenSSH per-connection server daemon (139.178.89.65:45776). Nov 6 00:29:17.752459 sshd[1757]: Accepted publickey for core from 139.178.89.65 port 45776 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:29:17.754083 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:17.759726 systemd-logind[1532]: New session 6 of user core. Nov 6 00:29:17.764280 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 6 00:29:17.952509 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:29:17.952866 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:29:17.957993 sudo[1762]: pam_unix(sudo:session): session closed for user root Nov 6 00:29:17.963901 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:29:17.964253 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:29:17.974179 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:29:18.018655 augenrules[1784]: No rules Nov 6 00:29:18.019323 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:29:18.019597 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:29:18.020915 sudo[1761]: pam_unix(sudo:session): session closed for user root Nov 6 00:29:18.072298 sshd[1760]: Connection closed by 139.178.89.65 port 45776 Nov 6 00:29:18.073130 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:18.077265 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:29:18.077915 systemd[1]: sshd@5-172.232.1.216:22-139.178.89.65:45776.service: Deactivated successfully. Nov 6 00:29:18.079773 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:29:18.081659 systemd-logind[1532]: Removed session 6. Nov 6 00:29:18.140090 systemd[1]: Started sshd@6-172.232.1.216:22-139.178.89.65:45782.service - OpenSSH per-connection server daemon (139.178.89.65:45782). 
Nov 6 00:29:18.506516 sshd[1793]: Accepted publickey for core from 139.178.89.65 port 45782 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:29:18.508279 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:18.513755 systemd-logind[1532]: New session 7 of user core. Nov 6 00:29:18.521281 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:29:18.716043 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:29:18.716409 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:29:19.056654 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:29:19.065534 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:29:19.315195 dockerd[1815]: time="2025-11-06T00:29:19.313758777Z" level=info msg="Starting up" Nov 6 00:29:19.317550 dockerd[1815]: time="2025-11-06T00:29:19.317528809Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:29:19.333522 dockerd[1815]: time="2025-11-06T00:29:19.333467336Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:29:19.361408 systemd[1]: var-lib-docker-metacopy\x2dcheck3731499189-merged.mount: Deactivated successfully. Nov 6 00:29:19.385427 dockerd[1815]: time="2025-11-06T00:29:19.385398102Z" level=info msg="Loading containers: start." Nov 6 00:29:19.398189 kernel: Initializing XFRM netlink socket Nov 6 00:29:19.714973 systemd-networkd[1449]: docker0: Link UP Nov 6 00:29:19.719692 dockerd[1815]: time="2025-11-06T00:29:19.719632299Z" level=info msg="Loading containers: done." 
Nov 6 00:29:19.737006 dockerd[1815]: time="2025-11-06T00:29:19.736936418Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:29:19.737255 dockerd[1815]: time="2025-11-06T00:29:19.737050318Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:29:19.737255 dockerd[1815]: time="2025-11-06T00:29:19.737138968Z" level=info msg="Initializing buildkit" Nov 6 00:29:19.762875 dockerd[1815]: time="2025-11-06T00:29:19.762374181Z" level=info msg="Completed buildkit initialization" Nov 6 00:29:19.772860 dockerd[1815]: time="2025-11-06T00:29:19.772831976Z" level=info msg="Daemon has completed initialization" Nov 6 00:29:19.773091 dockerd[1815]: time="2025-11-06T00:29:19.773010406Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:29:19.775011 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:29:20.441944 containerd[1556]: time="2025-11-06T00:29:20.441896550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 00:29:21.289612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257140190.mount: Deactivated successfully. 
Nov 6 00:29:22.468059 containerd[1556]: time="2025-11-06T00:29:22.467987123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:22.468917 containerd[1556]: time="2025-11-06T00:29:22.468866103Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 6 00:29:22.471103 containerd[1556]: time="2025-11-06T00:29:22.469431043Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:22.471805 containerd[1556]: time="2025-11-06T00:29:22.471777525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:22.472857 containerd[1556]: time="2025-11-06T00:29:22.472814625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.030877925s" Nov 6 00:29:22.472911 containerd[1556]: time="2025-11-06T00:29:22.472864465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 00:29:22.473887 containerd[1556]: time="2025-11-06T00:29:22.473857306Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 00:29:22.995090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Nov 6 00:29:22.999290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:29:23.192319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:29:23.200564 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:29:23.244330 kubelet[2093]: E1106 00:29:23.244258 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:29:23.248974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:29:23.249214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:29:23.249698 systemd[1]: kubelet.service: Consumed 200ms CPU time, 109.8M memory peak. 
Nov 6 00:29:23.911108 containerd[1556]: time="2025-11-06T00:29:23.911030164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:23.912230 containerd[1556]: time="2025-11-06T00:29:23.912002174Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 6 00:29:23.912813 containerd[1556]: time="2025-11-06T00:29:23.912777485Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:23.915123 containerd[1556]: time="2025-11-06T00:29:23.915087776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:23.916061 containerd[1556]: time="2025-11-06T00:29:23.916039716Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.44215166s" Nov 6 00:29:23.916137 containerd[1556]: time="2025-11-06T00:29:23.916123446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 00:29:23.917414 containerd[1556]: time="2025-11-06T00:29:23.917394987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 00:29:25.083251 containerd[1556]: time="2025-11-06T00:29:25.082320319Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:25.083251 containerd[1556]: time="2025-11-06T00:29:25.083220519Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 6 00:29:25.084021 containerd[1556]: time="2025-11-06T00:29:25.083808490Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:25.086895 containerd[1556]: time="2025-11-06T00:29:25.086872591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:25.087916 containerd[1556]: time="2025-11-06T00:29:25.087895102Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.170404125s" Nov 6 00:29:25.087991 containerd[1556]: time="2025-11-06T00:29:25.087977852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 00:29:25.089197 containerd[1556]: time="2025-11-06T00:29:25.089113872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 00:29:26.306496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600967393.mount: Deactivated successfully. 
Nov 6 00:29:26.606724 containerd[1556]: time="2025-11-06T00:29:26.606450950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:26.610668 containerd[1556]: time="2025-11-06T00:29:26.610632183Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Nov 6 00:29:26.610734 containerd[1556]: time="2025-11-06T00:29:26.610711633Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:26.615927 containerd[1556]: time="2025-11-06T00:29:26.613248154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:26.615927 containerd[1556]: time="2025-11-06T00:29:26.614064744Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.524869822s"
Nov 6 00:29:26.615927 containerd[1556]: time="2025-11-06T00:29:26.614090464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 6 00:29:26.616749 containerd[1556]: time="2025-11-06T00:29:26.616707996Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 6 00:29:27.268344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866803902.mount: Deactivated successfully.
Nov 6 00:29:28.133611 containerd[1556]: time="2025-11-06T00:29:28.133542503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:28.135169 containerd[1556]: time="2025-11-06T00:29:28.135114464Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Nov 6 00:29:28.135472 containerd[1556]: time="2025-11-06T00:29:28.135451274Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:28.138027 containerd[1556]: time="2025-11-06T00:29:28.137990986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:28.139331 containerd[1556]: time="2025-11-06T00:29:28.139310656Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.52248333s"
Nov 6 00:29:28.139404 containerd[1556]: time="2025-11-06T00:29:28.139390906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 6 00:29:28.141316 containerd[1556]: time="2025-11-06T00:29:28.141229387Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 6 00:29:28.767694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166149810.mount: Deactivated successfully.
Nov 6 00:29:28.773267 containerd[1556]: time="2025-11-06T00:29:28.773232993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:28.774030 containerd[1556]: time="2025-11-06T00:29:28.773950203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Nov 6 00:29:28.775210 containerd[1556]: time="2025-11-06T00:29:28.774706884Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:28.776526 containerd[1556]: time="2025-11-06T00:29:28.776486955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:28.777449 containerd[1556]: time="2025-11-06T00:29:28.777409635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 636.121558ms"
Nov 6 00:29:28.777449 containerd[1556]: time="2025-11-06T00:29:28.777446085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 6 00:29:28.778688 containerd[1556]: time="2025-11-06T00:29:28.778618196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 6 00:29:31.586437 containerd[1556]: time="2025-11-06T00:29:31.586365259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:31.587638 containerd[1556]: time="2025-11-06T00:29:31.587602029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Nov 6 00:29:31.588407 containerd[1556]: time="2025-11-06T00:29:31.588366490Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:31.590666 containerd[1556]: time="2025-11-06T00:29:31.590617101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:31.593668 containerd[1556]: time="2025-11-06T00:29:31.593575292Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.814919456s"
Nov 6 00:29:31.593668 containerd[1556]: time="2025-11-06T00:29:31.593610532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 6 00:29:33.495950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 6 00:29:33.498457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:29:33.701785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:29:33.711513 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:29:33.757433 kubelet[2243]: E1106 00:29:33.757278 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:29:33.762663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:29:33.763074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:29:33.764320 systemd[1]: kubelet.service: Consumed 197ms CPU time, 110M memory peak.
Nov 6 00:29:34.467668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:29:34.468048 systemd[1]: kubelet.service: Consumed 197ms CPU time, 110M memory peak.
Nov 6 00:29:34.471406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:29:34.504374 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)...
Nov 6 00:29:34.504391 systemd[1]: Reloading...
Nov 6 00:29:34.659183 zram_generator::config[2303]: No configuration found.
Nov 6 00:29:34.901515 systemd[1]: Reloading finished in 396 ms.
Nov 6 00:29:34.977573 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 6 00:29:34.977691 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 6 00:29:34.978091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:29:34.978169 systemd[1]: kubelet.service: Consumed 151ms CPU time, 98.1M memory peak.
Nov 6 00:29:34.980012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:29:35.164684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:29:35.173467 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 00:29:35.223545 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 00:29:35.223980 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:29:35.224091 kubelet[2355]: I1106 00:29:35.224069 2355 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 00:29:35.761853 kubelet[2355]: I1106 00:29:35.761792 2355 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 6 00:29:35.761853 kubelet[2355]: I1106 00:29:35.761823 2355 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 00:29:35.761853 kubelet[2355]: I1106 00:29:35.761853 2355 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 6 00:29:35.761853 kubelet[2355]: I1106 00:29:35.761861 2355 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 00:29:35.762445 kubelet[2355]: I1106 00:29:35.762349 2355 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 6 00:29:35.768106 kubelet[2355]: I1106 00:29:35.767610 2355 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 00:29:35.768106 kubelet[2355]: E1106 00:29:35.767735 2355 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.1.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 6 00:29:35.774713 kubelet[2355]: I1106 00:29:35.774679 2355 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 00:29:35.779953 kubelet[2355]: I1106 00:29:35.779917 2355 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 6 00:29:35.780203 kubelet[2355]: I1106 00:29:35.780135 2355 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 00:29:35.780346 kubelet[2355]: I1106 00:29:35.780200 2355 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-1-216","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 00:29:35.780475 kubelet[2355]: I1106 00:29:35.780350 2355 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 00:29:35.780475 kubelet[2355]: I1106 00:29:35.780363 2355 container_manager_linux.go:306] "Creating device plugin manager"
Nov 6 00:29:35.780475 kubelet[2355]: I1106 00:29:35.780443 2355 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 6 00:29:35.782511 kubelet[2355]: I1106 00:29:35.782482 2355 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:29:35.782768 kubelet[2355]: I1106 00:29:35.782737 2355 kubelet.go:475] "Attempting to sync node with API server"
Nov 6 00:29:35.783178 kubelet[2355]: I1106 00:29:35.782769 2355 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 00:29:35.783178 kubelet[2355]: I1106 00:29:35.783120 2355 kubelet.go:387] "Adding apiserver pod source"
Nov 6 00:29:35.783178 kubelet[2355]: I1106 00:29:35.783139 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 00:29:35.783438 kubelet[2355]: E1106 00:29:35.783408 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.1.216:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-1-216&limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 6 00:29:35.786049 kubelet[2355]: I1106 00:29:35.786015 2355 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 6 00:29:35.786466 kubelet[2355]: I1106 00:29:35.786435 2355 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 6 00:29:35.786516 kubelet[2355]: I1106 00:29:35.786472 2355 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 6 00:29:35.786545 kubelet[2355]: W1106 00:29:35.786527 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 6 00:29:35.789494 kubelet[2355]: E1106 00:29:35.789467 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.1.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 6 00:29:35.790784 kubelet[2355]: I1106 00:29:35.790748 2355 server.go:1262] "Started kubelet"
Nov 6 00:29:35.791006 kubelet[2355]: I1106 00:29:35.790962 2355 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 00:29:35.792761 kubelet[2355]: I1106 00:29:35.792721 2355 server.go:310] "Adding debug handlers to kubelet server"
Nov 6 00:29:35.796501 kubelet[2355]: I1106 00:29:35.796435 2355 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 00:29:35.796501 kubelet[2355]: I1106 00:29:35.796495 2355 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 6 00:29:35.796963 kubelet[2355]: I1106 00:29:35.796744 2355 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 00:29:35.798747 kubelet[2355]: E1106 00:29:35.797052 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.1.216:6443/api/v1/namespaces/default/events\": dial tcp 172.232.1.216:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-1-216.1875436f9a12120f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-1-216,UID:172-232-1-216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-1-216,},FirstTimestamp:2025-11-06 00:29:35.790723599 +0000 UTC m=+0.612038096,LastTimestamp:2025-11-06 00:29:35.790723599 +0000 UTC m=+0.612038096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-1-216,}"
Nov 6 00:29:35.799766 kubelet[2355]: I1106 00:29:35.799747 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 00:29:35.801326 kubelet[2355]: I1106 00:29:35.801291 2355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 00:29:35.805395 kubelet[2355]: E1106 00:29:35.805358 2355 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 00:29:35.806472 kubelet[2355]: E1106 00:29:35.806435 2355 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found"
Nov 6 00:29:35.806530 kubelet[2355]: I1106 00:29:35.806479 2355 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 6 00:29:35.806639 kubelet[2355]: I1106 00:29:35.806609 2355 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 6 00:29:35.806688 kubelet[2355]: I1106 00:29:35.806665 2355 reconciler.go:29] "Reconciler: start to sync state"
Nov 6 00:29:35.808197 kubelet[2355]: I1106 00:29:35.807555 2355 factory.go:223] Registration of the systemd container factory successfully
Nov 6 00:29:35.808197 kubelet[2355]: I1106 00:29:35.807625 2355 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 00:29:35.808197 kubelet[2355]: E1106 00:29:35.808050 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.1.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 6 00:29:35.808968 kubelet[2355]: I1106 00:29:35.808939 2355 factory.go:223] Registration of the containerd container factory successfully
Nov 6 00:29:35.810345 kubelet[2355]: I1106 00:29:35.810299 2355 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 6 00:29:35.816677 kubelet[2355]: E1106 00:29:35.816577 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.1.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-1-216?timeout=10s\": dial tcp 172.232.1.216:6443: connect: connection refused" interval="200ms"
Nov 6 00:29:35.836132 kubelet[2355]: I1106 00:29:35.836095 2355 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 00:29:35.836247 kubelet[2355]: I1106 00:29:35.836203 2355 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 00:29:35.836247 kubelet[2355]: I1106 00:29:35.836221 2355 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:29:35.838627 kubelet[2355]: I1106 00:29:35.838248 2355 policy_none.go:49] "None policy: Start"
Nov 6 00:29:35.838627 kubelet[2355]: I1106 00:29:35.838268 2355 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 6 00:29:35.838627 kubelet[2355]: I1106 00:29:35.838280 2355 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 6 00:29:35.839739 kubelet[2355]: I1106 00:29:35.839709 2355 policy_none.go:47] "Start"
Nov 6 00:29:35.846366 kubelet[2355]: I1106 00:29:35.846342 2355 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 6 00:29:35.846837 kubelet[2355]: I1106 00:29:35.846613 2355 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 6 00:29:35.846837 kubelet[2355]: I1106 00:29:35.846642 2355 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 6 00:29:35.846837 kubelet[2355]: E1106 00:29:35.846691 2355 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 00:29:35.848137 kubelet[2355]: E1106 00:29:35.848113 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.1.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 6 00:29:35.852798 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 6 00:29:35.863341 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 6 00:29:35.895330 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 6 00:29:35.897892 kubelet[2355]: E1106 00:29:35.897863 2355 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 6 00:29:35.898838 kubelet[2355]: I1106 00:29:35.898820 2355 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 00:29:35.899229 kubelet[2355]: I1106 00:29:35.899138 2355 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 00:29:35.900400 kubelet[2355]: I1106 00:29:35.900003 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 00:29:35.901500 kubelet[2355]: E1106 00:29:35.901462 2355 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 00:29:35.901715 kubelet[2355]: E1106 00:29:35.901514 2355 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-1-216\" not found"
Nov 6 00:29:35.961293 systemd[1]: Created slice kubepods-burstable-podb63da0b27da6e1e22962748d73612bb3.slice - libcontainer container kubepods-burstable-podb63da0b27da6e1e22962748d73612bb3.slice.
Nov 6 00:29:35.985345 kubelet[2355]: E1106 00:29:35.985226 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216"
Nov 6 00:29:35.989260 systemd[1]: Created slice kubepods-burstable-pod2bfa4c2ec368c14f76757a6d88a406cb.slice - libcontainer container kubepods-burstable-pod2bfa4c2ec368c14f76757a6d88a406cb.slice.
Nov 6 00:29:35.992342 kubelet[2355]: E1106 00:29:35.992317 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216"
Nov 6 00:29:35.996523 systemd[1]: Created slice kubepods-burstable-pod5347afb84fe2fc9376f9de57999e4f3a.slice - libcontainer container kubepods-burstable-pod5347afb84fe2fc9376f9de57999e4f3a.slice.
Nov 6 00:29:35.998836 kubelet[2355]: E1106 00:29:35.998679 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216"
Nov 6 00:29:36.001978 kubelet[2355]: I1106 00:29:36.001955 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-1-216"
Nov 6 00:29:36.002664 kubelet[2355]: E1106 00:29:36.002639 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.1.216:6443/api/v1/nodes\": dial tcp 172.232.1.216:6443: connect: connection refused" node="172-232-1-216"
Nov 6 00:29:36.007868 kubelet[2355]: I1106 00:29:36.007820 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-k8s-certs\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:36.007868 kubelet[2355]: I1106 00:29:36.007848 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5347afb84fe2fc9376f9de57999e4f3a-ca-certs\") pod \"kube-apiserver-172-232-1-216\" (UID: \"5347afb84fe2fc9376f9de57999e4f3a\") " pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:36.007868 kubelet[2355]: I1106 00:29:36.007865 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-ca-certs\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:36.008428 kubelet[2355]: I1106 00:29:36.007880 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-kubeconfig\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:36.008428 kubelet[2355]: I1106 00:29:36.007895 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:36.008428 kubelet[2355]: I1106 00:29:36.007912 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bfa4c2ec368c14f76757a6d88a406cb-kubeconfig\") pod \"kube-scheduler-172-232-1-216\" (UID: \"2bfa4c2ec368c14f76757a6d88a406cb\") " pod="kube-system/kube-scheduler-172-232-1-216"
Nov 6 00:29:36.008428 kubelet[2355]: I1106 00:29:36.007927 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5347afb84fe2fc9376f9de57999e4f3a-k8s-certs\") pod \"kube-apiserver-172-232-1-216\" (UID: \"5347afb84fe2fc9376f9de57999e4f3a\") " pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:36.008428 kubelet[2355]: I1106 00:29:36.008169 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5347afb84fe2fc9376f9de57999e4f3a-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-1-216\" (UID: \"5347afb84fe2fc9376f9de57999e4f3a\") " pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:36.008536 kubelet[2355]: I1106 00:29:36.008192 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-flexvolume-dir\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:36.017662 kubelet[2355]: E1106 00:29:36.017528 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.1.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-1-216?timeout=10s\": dial tcp 172.232.1.216:6443: connect: connection refused" interval="400ms"
Nov 6 00:29:36.204551 kubelet[2355]: I1106 00:29:36.204507 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-1-216"
Nov 6 00:29:36.204752 kubelet[2355]: E1106 00:29:36.204698 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.1.216:6443/api/v1/nodes\": dial tcp 172.232.1.216:6443: connect: connection refused" node="172-232-1-216"
Nov 6 00:29:36.288655 kubelet[2355]: E1106 00:29:36.288590 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:36.289716 containerd[1556]: time="2025-11-06T00:29:36.289668519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-1-216,Uid:b63da0b27da6e1e22962748d73612bb3,Namespace:kube-system,Attempt:0,}"
Nov 6 00:29:36.294847 kubelet[2355]: E1106 00:29:36.294796 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:36.295750 containerd[1556]: time="2025-11-06T00:29:36.295602492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-1-216,Uid:2bfa4c2ec368c14f76757a6d88a406cb,Namespace:kube-system,Attempt:0,}"
Nov 6 00:29:36.300310 kubelet[2355]: E1106 00:29:36.300280 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:36.301033 containerd[1556]: time="2025-11-06T00:29:36.300778544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-1-216,Uid:5347afb84fe2fc9376f9de57999e4f3a,Namespace:kube-system,Attempt:0,}"
Nov 6 00:29:36.418472 kubelet[2355]: E1106 00:29:36.418403 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.1.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-1-216?timeout=10s\": dial tcp 172.232.1.216:6443: connect: connection refused" interval="800ms"
Nov 6 00:29:36.608031 kubelet[2355]: I1106 00:29:36.607524 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-1-216"
Nov 6 00:29:36.608405 kubelet[2355]: E1106 00:29:36.608253 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.1.216:6443/api/v1/nodes\": dial tcp 172.232.1.216:6443: connect: connection refused" node="172-232-1-216"
Nov 6 00:29:36.916577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576025213.mount: Deactivated successfully.
Nov 6 00:29:36.922890 containerd[1556]: time="2025-11-06T00:29:36.922840775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:29:36.923523 containerd[1556]: time="2025-11-06T00:29:36.923490865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:29:36.924539 containerd[1556]: time="2025-11-06T00:29:36.924477876Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:29:36.926271 containerd[1556]: time="2025-11-06T00:29:36.926219607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:29:36.926816 containerd[1556]: time="2025-11-06T00:29:36.926714027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:29:36.927290 containerd[1556]: time="2025-11-06T00:29:36.927268697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:29:36.928062 containerd[1556]: time="2025-11-06T00:29:36.928043568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:29:36.928760 containerd[1556]: time="2025-11-06T00:29:36.928619788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:29:36.929488 
containerd[1556]: time="2025-11-06T00:29:36.929444398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.765573ms" Nov 6 00:29:36.930735 containerd[1556]: time="2025-11-06T00:29:36.930711019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.357246ms" Nov 6 00:29:36.941173 kubelet[2355]: E1106 00:29:36.940195 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.1.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:29:36.951611 kubelet[2355]: E1106 00:29:36.951566 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.1.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:29:36.959500 containerd[1556]: time="2025-11-06T00:29:36.959475183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" 
in 667.925753ms" Nov 6 00:29:36.961123 containerd[1556]: time="2025-11-06T00:29:36.961100394Z" level=info msg="connecting to shim b2548f72233eaf877e48aee2e12dec571d3eb59d6226a6296a79ae15b39030bf" address="unix:///run/containerd/s/cce46e7bf9b07100455550d148de265e1c2250fdeb0e1aa3f3e32175faedcb29" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:36.961827 containerd[1556]: time="2025-11-06T00:29:36.961802245Z" level=info msg="connecting to shim a16652c0ebe2c29bedf3a896373b5a1e832d4a2a196aec69133da8fdb3f9d853" address="unix:///run/containerd/s/cb55e86c7de3d7d209787b7c37fc29ef58bd48dca301da5ba97309448b1690d6" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:36.991479 containerd[1556]: time="2025-11-06T00:29:36.991433439Z" level=info msg="connecting to shim d5dcafd6526d24fb906ae118c111c26d077f81b0e7b3f0a415f52a1cbcce3677" address="unix:///run/containerd/s/e0c54a8aa82786223a2a475bec8dae90b93a4a85a422857a338ef6c0d261b5a9" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:37.007346 systemd[1]: Started cri-containerd-b2548f72233eaf877e48aee2e12dec571d3eb59d6226a6296a79ae15b39030bf.scope - libcontainer container b2548f72233eaf877e48aee2e12dec571d3eb59d6226a6296a79ae15b39030bf. Nov 6 00:29:37.019316 systemd[1]: Started cri-containerd-a16652c0ebe2c29bedf3a896373b5a1e832d4a2a196aec69133da8fdb3f9d853.scope - libcontainer container a16652c0ebe2c29bedf3a896373b5a1e832d4a2a196aec69133da8fdb3f9d853. Nov 6 00:29:37.030529 systemd[1]: Started cri-containerd-d5dcafd6526d24fb906ae118c111c26d077f81b0e7b3f0a415f52a1cbcce3677.scope - libcontainer container d5dcafd6526d24fb906ae118c111c26d077f81b0e7b3f0a415f52a1cbcce3677. 
Nov 6 00:29:37.151541 containerd[1556]: time="2025-11-06T00:29:37.151504099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-1-216,Uid:2bfa4c2ec368c14f76757a6d88a406cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a16652c0ebe2c29bedf3a896373b5a1e832d4a2a196aec69133da8fdb3f9d853\"" Nov 6 00:29:37.153679 containerd[1556]: time="2025-11-06T00:29:37.153655070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-1-216,Uid:5347afb84fe2fc9376f9de57999e4f3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2548f72233eaf877e48aee2e12dec571d3eb59d6226a6296a79ae15b39030bf\"" Nov 6 00:29:37.155579 kubelet[2355]: E1106 00:29:37.155504 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:37.156446 kubelet[2355]: E1106 00:29:37.156431 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:37.165452 containerd[1556]: time="2025-11-06T00:29:37.165382896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-1-216,Uid:b63da0b27da6e1e22962748d73612bb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5dcafd6526d24fb906ae118c111c26d077f81b0e7b3f0a415f52a1cbcce3677\"" Nov 6 00:29:37.165825 containerd[1556]: time="2025-11-06T00:29:37.165742316Z" level=info msg="CreateContainer within sandbox \"a16652c0ebe2c29bedf3a896373b5a1e832d4a2a196aec69133da8fdb3f9d853\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:29:37.167320 containerd[1556]: time="2025-11-06T00:29:37.166343547Z" level=info msg="CreateContainer within sandbox \"b2548f72233eaf877e48aee2e12dec571d3eb59d6226a6296a79ae15b39030bf\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:29:37.169386 kubelet[2355]: E1106 00:29:37.169371 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:37.174746 containerd[1556]: time="2025-11-06T00:29:37.174690111Z" level=info msg="CreateContainer within sandbox \"d5dcafd6526d24fb906ae118c111c26d077f81b0e7b3f0a415f52a1cbcce3677\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:29:37.183880 containerd[1556]: time="2025-11-06T00:29:37.183838856Z" level=info msg="Container f53e38ef6ddc985ac2aa7a4a83e09aef0da4fab321a540e6cce1db788ea67e1e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:37.191377 containerd[1556]: time="2025-11-06T00:29:37.191309029Z" level=info msg="Container b24f57e91da8087ccd086e3dde1005553e2726ef23820f742ee23afad7157657: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:37.196768 containerd[1556]: time="2025-11-06T00:29:37.196723562Z" level=info msg="Container ec5d8f74143c416f0e55f6549f7fb25e6d7228a8c8c0740d4cbdbd1389e158d9: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:37.199128 containerd[1556]: time="2025-11-06T00:29:37.199007153Z" level=info msg="CreateContainer within sandbox \"b2548f72233eaf877e48aee2e12dec571d3eb59d6226a6296a79ae15b39030bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f53e38ef6ddc985ac2aa7a4a83e09aef0da4fab321a540e6cce1db788ea67e1e\"" Nov 6 00:29:37.199979 containerd[1556]: time="2025-11-06T00:29:37.199955864Z" level=info msg="StartContainer for \"f53e38ef6ddc985ac2aa7a4a83e09aef0da4fab321a540e6cce1db788ea67e1e\"" Nov 6 00:29:37.202805 containerd[1556]: time="2025-11-06T00:29:37.202780005Z" level=info msg="connecting to shim f53e38ef6ddc985ac2aa7a4a83e09aef0da4fab321a540e6cce1db788ea67e1e" 
address="unix:///run/containerd/s/cce46e7bf9b07100455550d148de265e1c2250fdeb0e1aa3f3e32175faedcb29" protocol=ttrpc version=3 Nov 6 00:29:37.203470 containerd[1556]: time="2025-11-06T00:29:37.203419005Z" level=info msg="CreateContainer within sandbox \"a16652c0ebe2c29bedf3a896373b5a1e832d4a2a196aec69133da8fdb3f9d853\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b24f57e91da8087ccd086e3dde1005553e2726ef23820f742ee23afad7157657\"" Nov 6 00:29:37.204106 containerd[1556]: time="2025-11-06T00:29:37.204021996Z" level=info msg="StartContainer for \"b24f57e91da8087ccd086e3dde1005553e2726ef23820f742ee23afad7157657\"" Nov 6 00:29:37.206697 containerd[1556]: time="2025-11-06T00:29:37.206657667Z" level=info msg="connecting to shim b24f57e91da8087ccd086e3dde1005553e2726ef23820f742ee23afad7157657" address="unix:///run/containerd/s/cb55e86c7de3d7d209787b7c37fc29ef58bd48dca301da5ba97309448b1690d6" protocol=ttrpc version=3 Nov 6 00:29:37.209760 containerd[1556]: time="2025-11-06T00:29:37.209713688Z" level=info msg="CreateContainer within sandbox \"d5dcafd6526d24fb906ae118c111c26d077f81b0e7b3f0a415f52a1cbcce3677\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec5d8f74143c416f0e55f6549f7fb25e6d7228a8c8c0740d4cbdbd1389e158d9\"" Nov 6 00:29:37.210524 containerd[1556]: time="2025-11-06T00:29:37.210478479Z" level=info msg="StartContainer for \"ec5d8f74143c416f0e55f6549f7fb25e6d7228a8c8c0740d4cbdbd1389e158d9\"" Nov 6 00:29:37.230517 kubelet[2355]: E1106 00:29:37.230449 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.1.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-1-216?timeout=10s\": dial tcp 172.232.1.216:6443: connect: connection refused" interval="1.6s" Nov 6 00:29:37.231319 containerd[1556]: time="2025-11-06T00:29:37.231295149Z" level=info msg="connecting to shim ec5d8f74143c416f0e55f6549f7fb25e6d7228a8c8c0740d4cbdbd1389e158d9" 
address="unix:///run/containerd/s/e0c54a8aa82786223a2a475bec8dae90b93a4a85a422857a338ef6c0d261b5a9" protocol=ttrpc version=3 Nov 6 00:29:37.253347 systemd[1]: Started cri-containerd-b24f57e91da8087ccd086e3dde1005553e2726ef23820f742ee23afad7157657.scope - libcontainer container b24f57e91da8087ccd086e3dde1005553e2726ef23820f742ee23afad7157657. Nov 6 00:29:37.254916 systemd[1]: Started cri-containerd-f53e38ef6ddc985ac2aa7a4a83e09aef0da4fab321a540e6cce1db788ea67e1e.scope - libcontainer container f53e38ef6ddc985ac2aa7a4a83e09aef0da4fab321a540e6cce1db788ea67e1e. Nov 6 00:29:37.267035 systemd[1]: Started cri-containerd-ec5d8f74143c416f0e55f6549f7fb25e6d7228a8c8c0740d4cbdbd1389e158d9.scope - libcontainer container ec5d8f74143c416f0e55f6549f7fb25e6d7228a8c8c0740d4cbdbd1389e158d9. Nov 6 00:29:37.278404 kubelet[2355]: E1106 00:29:37.278311 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.1.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:29:37.330333 kubelet[2355]: E1106 00:29:37.330275 2355 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.1.216:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-1-216&limit=500&resourceVersion=0\": dial tcp 172.232.1.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:29:37.359536 containerd[1556]: time="2025-11-06T00:29:37.359461703Z" level=info msg="StartContainer for \"ec5d8f74143c416f0e55f6549f7fb25e6d7228a8c8c0740d4cbdbd1389e158d9\" returns successfully" Nov 6 00:29:37.387846 containerd[1556]: time="2025-11-06T00:29:37.387711987Z" level=info msg="StartContainer for \"f53e38ef6ddc985ac2aa7a4a83e09aef0da4fab321a540e6cce1db788ea67e1e\" returns successfully" Nov 6 
00:29:37.414345 kubelet[2355]: I1106 00:29:37.413944 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-1-216" Nov 6 00:29:37.415078 kubelet[2355]: E1106 00:29:37.415050 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.1.216:6443/api/v1/nodes\": dial tcp 172.232.1.216:6443: connect: connection refused" node="172-232-1-216" Nov 6 00:29:37.428957 containerd[1556]: time="2025-11-06T00:29:37.428868098Z" level=info msg="StartContainer for \"b24f57e91da8087ccd086e3dde1005553e2726ef23820f742ee23afad7157657\" returns successfully" Nov 6 00:29:37.864241 kubelet[2355]: E1106 00:29:37.863797 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:37.864241 kubelet[2355]: E1106 00:29:37.863920 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:37.873556 kubelet[2355]: E1106 00:29:37.873538 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:37.877456 kubelet[2355]: E1106 00:29:37.877318 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:37.879821 kubelet[2355]: E1106 00:29:37.879807 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:37.880076 kubelet[2355]: E1106 00:29:37.880021 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 
172.232.0.20 172.232.0.15" Nov 6 00:29:38.883587 kubelet[2355]: E1106 00:29:38.882846 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:38.883587 kubelet[2355]: E1106 00:29:38.882987 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:38.883587 kubelet[2355]: E1106 00:29:38.883217 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:38.883587 kubelet[2355]: E1106 00:29:38.883298 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:38.883587 kubelet[2355]: E1106 00:29:38.883462 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:38.883587 kubelet[2355]: E1106 00:29:38.883541 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:39.018987 kubelet[2355]: I1106 00:29:39.018955 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-1-216" Nov 6 00:29:39.604229 kubelet[2355]: E1106 00:29:39.604182 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:39.708322 kubelet[2355]: I1106 00:29:39.708272 2355 kubelet_node_status.go:78] "Successfully registered node" node="172-232-1-216" Nov 6 00:29:39.708322 kubelet[2355]: E1106 00:29:39.708313 
2355 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-232-1-216\": node \"172-232-1-216\" not found" Nov 6 00:29:39.729436 kubelet[2355]: E1106 00:29:39.729376 2355 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found" Nov 6 00:29:39.830407 kubelet[2355]: E1106 00:29:39.830288 2355 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found" Nov 6 00:29:39.884796 kubelet[2355]: E1106 00:29:39.884669 2355 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-1-216\" not found" node="172-232-1-216" Nov 6 00:29:39.885426 kubelet[2355]: E1106 00:29:39.885312 2355 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:29:39.930623 kubelet[2355]: E1106 00:29:39.930589 2355 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found" Nov 6 00:29:40.030914 kubelet[2355]: E1106 00:29:40.030880 2355 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found" Nov 6 00:29:40.131856 kubelet[2355]: E1106 00:29:40.131826 2355 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found" Nov 6 00:29:40.232990 kubelet[2355]: E1106 00:29:40.232880 2355 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found" Nov 6 00:29:40.313292 kubelet[2355]: I1106 00:29:40.313258 2355 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-1-216" Nov 6 00:29:40.317556 kubelet[2355]: E1106 00:29:40.317531 2355 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-172-232-1-216\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-1-216" Nov 6 00:29:40.317556 kubelet[2355]: I1106 00:29:40.317553 2355 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-1-216" Nov 6 00:29:40.318844 kubelet[2355]: E1106 00:29:40.318799 2355 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-1-216\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-1-216" Nov 6 00:29:40.318844 kubelet[2355]: I1106 00:29:40.318820 2355 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-1-216" Nov 6 00:29:40.320469 kubelet[2355]: E1106 00:29:40.320439 2355 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-1-216\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-1-216" Nov 6 00:29:40.555639 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 6 00:29:40.792850 kubelet[2355]: I1106 00:29:40.792761 2355 apiserver.go:52] "Watching apiserver" Nov 6 00:29:40.807649 kubelet[2355]: I1106 00:29:40.807442 2355 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:29:41.635993 systemd[1]: Reload requested from client PID 2647 ('systemctl') (unit session-7.scope)... Nov 6 00:29:41.636016 systemd[1]: Reloading... Nov 6 00:29:41.777172 zram_generator::config[2693]: No configuration found. Nov 6 00:29:42.018791 systemd[1]: Reloading finished in 382 ms. Nov 6 00:29:42.058798 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:29:42.072675 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:29:42.073371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:29:42.073457 systemd[1]: kubelet.service: Consumed 1.099s CPU time, 125.1M memory peak. Nov 6 00:29:42.076691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:29:42.291706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:29:42.303923 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:29:42.352076 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:29:42.352076 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:29:42.352666 kubelet[2741]: I1106 00:29:42.352097 2741 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:29:42.367714 kubelet[2741]: I1106 00:29:42.367668 2741 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:29:42.367714 kubelet[2741]: I1106 00:29:42.367694 2741 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:29:42.367714 kubelet[2741]: I1106 00:29:42.367722 2741 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:29:42.367714 kubelet[2741]: I1106 00:29:42.367729 2741 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:29:42.369344 kubelet[2741]: I1106 00:29:42.367975 2741 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:29:42.369344 kubelet[2741]: I1106 00:29:42.369285 2741 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:29:42.372735 kubelet[2741]: I1106 00:29:42.372327 2741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:29:42.381124 kubelet[2741]: I1106 00:29:42.381074 2741 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:29:42.386469 kubelet[2741]: I1106 00:29:42.386433 2741 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 00:29:42.386824 kubelet[2741]: I1106 00:29:42.386714 2741 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:29:42.386949 kubelet[2741]: I1106 00:29:42.386756 2741 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"172-232-1-216","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:29:42.386949 kubelet[2741]: I1106 00:29:42.386947 2741 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:29:42.387307 kubelet[2741]: I1106 00:29:42.386960 2741 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:29:42.387307 kubelet[2741]: I1106 00:29:42.387202 2741 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:29:42.388451 kubelet[2741]: I1106 00:29:42.388413 2741 state_mem.go:36] 
"Initialized new in-memory state store" Nov 6 00:29:42.388631 kubelet[2741]: I1106 00:29:42.388595 2741 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:29:42.388631 kubelet[2741]: I1106 00:29:42.388628 2741 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:29:42.388708 kubelet[2741]: I1106 00:29:42.388694 2741 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:29:42.388738 kubelet[2741]: I1106 00:29:42.388729 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:29:42.392304 kubelet[2741]: I1106 00:29:42.392273 2741 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:29:42.392820 kubelet[2741]: I1106 00:29:42.392750 2741 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:29:42.392820 kubelet[2741]: I1106 00:29:42.392787 2741 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:29:42.398947 kubelet[2741]: I1106 00:29:42.398763 2741 server.go:1262] "Started kubelet" Nov 6 00:29:42.402176 kubelet[2741]: I1106 00:29:42.401479 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:29:42.417924 kubelet[2741]: I1106 00:29:42.416437 2741 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:29:42.419724 kubelet[2741]: I1106 00:29:42.418911 2741 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:29:42.427547 kubelet[2741]: I1106 00:29:42.427509 2741 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:29:42.435540 kubelet[2741]: I1106 00:29:42.435478 2741 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:29:42.435774 kubelet[2741]: I1106 00:29:42.428856 
2741 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:29:42.444620 kubelet[2741]: I1106 00:29:42.427957 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:29:42.444620 kubelet[2741]: I1106 00:29:42.428880 2741 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:29:42.444716 kubelet[2741]: I1106 00:29:42.444668 2741 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:29:42.446141 kubelet[2741]: E1106 00:29:42.430291 2741 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-1-216\" not found" Nov 6 00:29:42.447333 kubelet[2741]: I1106 00:29:42.446538 2741 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:29:42.450941 kubelet[2741]: I1106 00:29:42.450840 2741 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:29:42.450941 kubelet[2741]: I1106 00:29:42.450933 2741 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:29:42.454673 kubelet[2741]: I1106 00:29:42.454642 2741 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:29:42.457351 kubelet[2741]: I1106 00:29:42.457293 2741 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:29:42.461197 kubelet[2741]: I1106 00:29:42.461120 2741 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6"
Nov 6 00:29:42.461197 kubelet[2741]: I1106 00:29:42.461171 2741 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 6 00:29:42.461197 kubelet[2741]: I1106 00:29:42.461196 2741 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 6 00:29:42.461293 kubelet[2741]: E1106 00:29:42.461238 2741 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 00:29:42.461636 kubelet[2741]: E1106 00:29:42.461606 2741 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 00:29:42.512683 kubelet[2741]: I1106 00:29:42.512650 2741 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 00:29:42.512683 kubelet[2741]: I1106 00:29:42.512674 2741 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 00:29:42.512784 kubelet[2741]: I1106 00:29:42.512694 2741 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:29:42.512813 kubelet[2741]: I1106 00:29:42.512806 2741 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 6 00:29:42.512839 kubelet[2741]: I1106 00:29:42.512817 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 6 00:29:42.512839 kubelet[2741]: I1106 00:29:42.512832 2741 policy_none.go:49] "None policy: Start"
Nov 6 00:29:42.512881 kubelet[2741]: I1106 00:29:42.512842 2741 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 6 00:29:42.512881 kubelet[2741]: I1106 00:29:42.512853 2741 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 6 00:29:42.513287 kubelet[2741]: I1106 00:29:42.512933 2741 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 6 00:29:42.513287 kubelet[2741]: I1106 00:29:42.512947 2741 policy_none.go:47] "Start"
Nov 6 00:29:42.522765 kubelet[2741]: E1106 00:29:42.522717 2741 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 6 00:29:42.522905 kubelet[2741]: I1106 00:29:42.522872 2741 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 00:29:42.522967 kubelet[2741]: I1106 00:29:42.522894 2741 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 00:29:42.524651 kubelet[2741]: I1106 00:29:42.523520 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 00:29:42.527142 kubelet[2741]: E1106 00:29:42.526583 2741 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 00:29:42.562881 kubelet[2741]: I1106 00:29:42.561984 2741 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-1-216"
Nov 6 00:29:42.564624 kubelet[2741]: I1106 00:29:42.564412 2741 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:42.564624 kubelet[2741]: I1106 00:29:42.564521 2741 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:42.626907 kubelet[2741]: I1106 00:29:42.626854 2741 kubelet_node_status.go:75] "Attempting to register node" node="172-232-1-216"
Nov 6 00:29:42.637474 kubelet[2741]: I1106 00:29:42.637424 2741 kubelet_node_status.go:124] "Node was previously registered" node="172-232-1-216"
Nov 6 00:29:42.637559 kubelet[2741]: I1106 00:29:42.637527 2741 kubelet_node_status.go:78] "Successfully registered node" node="172-232-1-216"
Nov 6 00:29:42.646095 kubelet[2741]: I1106 00:29:42.644995 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5347afb84fe2fc9376f9de57999e4f3a-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-1-216\" (UID: \"5347afb84fe2fc9376f9de57999e4f3a\") " pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:42.646226 kubelet[2741]: I1106 00:29:42.646112 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-k8s-certs\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:42.646226 kubelet[2741]: I1106 00:29:42.646133 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-kubeconfig\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:42.646226 kubelet[2741]: I1106 00:29:42.646182 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:42.646226 kubelet[2741]: I1106 00:29:42.646204 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5347afb84fe2fc9376f9de57999e4f3a-k8s-certs\") pod \"kube-apiserver-172-232-1-216\" (UID: \"5347afb84fe2fc9376f9de57999e4f3a\") " pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:42.646226 kubelet[2741]: I1106 00:29:42.646218 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-ca-certs\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:42.646795 kubelet[2741]: I1106 00:29:42.646232 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b63da0b27da6e1e22962748d73612bb3-flexvolume-dir\") pod \"kube-controller-manager-172-232-1-216\" (UID: \"b63da0b27da6e1e22962748d73612bb3\") " pod="kube-system/kube-controller-manager-172-232-1-216"
Nov 6 00:29:42.646795 kubelet[2741]: I1106 00:29:42.646248 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bfa4c2ec368c14f76757a6d88a406cb-kubeconfig\") pod \"kube-scheduler-172-232-1-216\" (UID: \"2bfa4c2ec368c14f76757a6d88a406cb\") " pod="kube-system/kube-scheduler-172-232-1-216"
Nov 6 00:29:42.646795 kubelet[2741]: I1106 00:29:42.646260 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5347afb84fe2fc9376f9de57999e4f3a-ca-certs\") pod \"kube-apiserver-172-232-1-216\" (UID: \"5347afb84fe2fc9376f9de57999e4f3a\") " pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:42.869362 kubelet[2741]: E1106 00:29:42.868638 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:42.876279 kubelet[2741]: E1106 00:29:42.873837 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:42.876279 kubelet[2741]: E1106 00:29:42.875532 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:43.399776 kubelet[2741]: I1106 00:29:43.399709 2741 apiserver.go:52] "Watching apiserver"
Nov 6 00:29:43.445069 kubelet[2741]: I1106 00:29:43.444988 2741 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 6 00:29:43.493753 kubelet[2741]: I1106 00:29:43.492457 2741 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:43.494823 kubelet[2741]: E1106 00:29:43.494713 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:43.498963 kubelet[2741]: E1106 00:29:43.498944 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:43.511650 kubelet[2741]: E1106 00:29:43.511629 2741 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-1-216\" already exists" pod="kube-system/kube-apiserver-172-232-1-216"
Nov 6 00:29:43.512886 kubelet[2741]: E1106 00:29:43.512132 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:43.538089 kubelet[2741]: I1106 00:29:43.537920 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-1-216" podStartSLOduration=1.53790518 podStartE2EDuration="1.53790518s" podCreationTimestamp="2025-11-06 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:43.537477831 +0000 UTC m=+1.228084835" watchObservedRunningTime="2025-11-06 00:29:43.53790518 +0000 UTC m=+1.228512184"
Nov 6 00:29:43.565095 kubelet[2741]: I1106 00:29:43.565050 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-1-216" podStartSLOduration=1.5650327960000001 podStartE2EDuration="1.565032796s" podCreationTimestamp="2025-11-06 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:43.551923757 +0000 UTC m=+1.242530771" watchObservedRunningTime="2025-11-06 00:29:43.565032796 +0000 UTC m=+1.255639800"
Nov 6 00:29:44.495553 kubelet[2741]: E1106 00:29:44.495479 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:44.496550 kubelet[2741]: E1106 00:29:44.496485 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:45.498256 kubelet[2741]: E1106 00:29:45.498188 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:46.972189 kubelet[2741]: E1106 00:29:46.971976 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:48.604528 kubelet[2741]: I1106 00:29:48.604468 2741 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 6 00:29:48.606025 containerd[1556]: time="2025-11-06T00:29:48.605513177Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 6 00:29:48.606357 kubelet[2741]: I1106 00:29:48.605720 2741 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 6 00:29:49.580626 kubelet[2741]: I1106 00:29:49.579903 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-1-216" podStartSLOduration=7.579881644 podStartE2EDuration="7.579881644s" podCreationTimestamp="2025-11-06 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:43.566558134 +0000 UTC m=+1.257165158" watchObservedRunningTime="2025-11-06 00:29:49.579881644 +0000 UTC m=+7.270488648"
Nov 6 00:29:49.601872 systemd[1]: Created slice kubepods-besteffort-pod9b2f805d_d24e_48f3_a5f2_670fcd486626.slice - libcontainer container kubepods-besteffort-pod9b2f805d_d24e_48f3_a5f2_670fcd486626.slice.
Nov 6 00:29:49.693965 kubelet[2741]: I1106 00:29:49.693765 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b2f805d-d24e-48f3-a5f2-670fcd486626-lib-modules\") pod \"kube-proxy-j6lwq\" (UID: \"9b2f805d-d24e-48f3-a5f2-670fcd486626\") " pod="kube-system/kube-proxy-j6lwq"
Nov 6 00:29:49.693965 kubelet[2741]: I1106 00:29:49.693813 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wmm7\" (UniqueName: \"kubernetes.io/projected/9b2f805d-d24e-48f3-a5f2-670fcd486626-kube-api-access-7wmm7\") pod \"kube-proxy-j6lwq\" (UID: \"9b2f805d-d24e-48f3-a5f2-670fcd486626\") " pod="kube-system/kube-proxy-j6lwq"
Nov 6 00:29:49.693965 kubelet[2741]: I1106 00:29:49.693841 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b2f805d-d24e-48f3-a5f2-670fcd486626-kube-proxy\") pod \"kube-proxy-j6lwq\" (UID: \"9b2f805d-d24e-48f3-a5f2-670fcd486626\") " pod="kube-system/kube-proxy-j6lwq"
Nov 6 00:29:49.693965 kubelet[2741]: I1106 00:29:49.693895 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b2f805d-d24e-48f3-a5f2-670fcd486626-xtables-lock\") pod \"kube-proxy-j6lwq\" (UID: \"9b2f805d-d24e-48f3-a5f2-670fcd486626\") " pod="kube-system/kube-proxy-j6lwq"
Nov 6 00:29:49.747635 systemd[1]: Created slice kubepods-besteffort-podc8bfaca5_2504_4b60_ba9a_08da0b87d25b.slice - libcontainer container kubepods-besteffort-podc8bfaca5_2504_4b60_ba9a_08da0b87d25b.slice.
Nov 6 00:29:49.794837 kubelet[2741]: I1106 00:29:49.794792 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8bfaca5-2504-4b60-ba9a-08da0b87d25b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-7x6rd\" (UID: \"c8bfaca5-2504-4b60-ba9a-08da0b87d25b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7x6rd"
Nov 6 00:29:49.795298 kubelet[2741]: I1106 00:29:49.795020 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpjg\" (UniqueName: \"kubernetes.io/projected/c8bfaca5-2504-4b60-ba9a-08da0b87d25b-kube-api-access-4mpjg\") pod \"tigera-operator-65cdcdfd6d-7x6rd\" (UID: \"c8bfaca5-2504-4b60-ba9a-08da0b87d25b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7x6rd"
Nov 6 00:29:49.917530 kubelet[2741]: E1106 00:29:49.915615 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:49.919525 containerd[1556]: time="2025-11-06T00:29:49.919445055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j6lwq,Uid:9b2f805d-d24e-48f3-a5f2-670fcd486626,Namespace:kube-system,Attempt:0,}"
Nov 6 00:29:49.940363 containerd[1556]: time="2025-11-06T00:29:49.940234691Z" level=info msg="connecting to shim 8ad0e2b034acb86612016294db0054901e45a46c7f9a2bcce9371347a4288cf2" address="unix:///run/containerd/s/1aa17e954ef638671744a81c663f7a8c727663365775e1bc9818f1c10869b8b7" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:29:49.973336 systemd[1]: Started cri-containerd-8ad0e2b034acb86612016294db0054901e45a46c7f9a2bcce9371347a4288cf2.scope - libcontainer container 8ad0e2b034acb86612016294db0054901e45a46c7f9a2bcce9371347a4288cf2.
Nov 6 00:29:50.012956 containerd[1556]: time="2025-11-06T00:29:50.012914657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j6lwq,Uid:9b2f805d-d24e-48f3-a5f2-670fcd486626,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ad0e2b034acb86612016294db0054901e45a46c7f9a2bcce9371347a4288cf2\""
Nov 6 00:29:50.015583 kubelet[2741]: E1106 00:29:50.014809 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:50.024504 containerd[1556]: time="2025-11-06T00:29:50.024096666Z" level=info msg="CreateContainer within sandbox \"8ad0e2b034acb86612016294db0054901e45a46c7f9a2bcce9371347a4288cf2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 6 00:29:50.040356 containerd[1556]: time="2025-11-06T00:29:50.040317576Z" level=info msg="Container 51ffb06e13c81f8d5485383bdc65c46a0fbcebf8c4e6993582956ff9469624c1: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:29:50.046519 containerd[1556]: time="2025-11-06T00:29:50.046482144Z" level=info msg="CreateContainer within sandbox \"8ad0e2b034acb86612016294db0054901e45a46c7f9a2bcce9371347a4288cf2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51ffb06e13c81f8d5485383bdc65c46a0fbcebf8c4e6993582956ff9469624c1\""
Nov 6 00:29:50.048191 containerd[1556]: time="2025-11-06T00:29:50.047422841Z" level=info msg="StartContainer for \"51ffb06e13c81f8d5485383bdc65c46a0fbcebf8c4e6993582956ff9469624c1\""
Nov 6 00:29:50.048949 containerd[1556]: time="2025-11-06T00:29:50.048928027Z" level=info msg="connecting to shim 51ffb06e13c81f8d5485383bdc65c46a0fbcebf8c4e6993582956ff9469624c1" address="unix:///run/containerd/s/1aa17e954ef638671744a81c663f7a8c727663365775e1bc9818f1c10869b8b7" protocol=ttrpc version=3
Nov 6 00:29:50.062017 containerd[1556]: time="2025-11-06T00:29:50.061981029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7x6rd,Uid:c8bfaca5-2504-4b60-ba9a-08da0b87d25b,Namespace:tigera-operator,Attempt:0,}"
Nov 6 00:29:50.074292 systemd[1]: Started cri-containerd-51ffb06e13c81f8d5485383bdc65c46a0fbcebf8c4e6993582956ff9469624c1.scope - libcontainer container 51ffb06e13c81f8d5485383bdc65c46a0fbcebf8c4e6993582956ff9469624c1.
Nov 6 00:29:50.088177 containerd[1556]: time="2025-11-06T00:29:50.087352606Z" level=info msg="connecting to shim 6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a" address="unix:///run/containerd/s/483656e7e22bc99be5bf6d749ceb72c12683b2bf6866b2ee08904d19c8cb6960" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:29:50.123325 systemd[1]: Started cri-containerd-6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a.scope - libcontainer container 6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a.
Nov 6 00:29:50.145507 containerd[1556]: time="2025-11-06T00:29:50.145401418Z" level=info msg="StartContainer for \"51ffb06e13c81f8d5485383bdc65c46a0fbcebf8c4e6993582956ff9469624c1\" returns successfully"
Nov 6 00:29:50.193860 containerd[1556]: time="2025-11-06T00:29:50.193723723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7x6rd,Uid:c8bfaca5-2504-4b60-ba9a-08da0b87d25b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a\""
Nov 6 00:29:50.197581 containerd[1556]: time="2025-11-06T00:29:50.197549409Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 6 00:29:50.517659 kubelet[2741]: E1106 00:29:50.517231 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:50.941811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674505201.mount: Deactivated successfully.
Nov 6 00:29:51.506939 containerd[1556]: time="2025-11-06T00:29:51.506243124Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:51.506939 containerd[1556]: time="2025-11-06T00:29:51.506901598Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 6 00:29:51.507593 containerd[1556]: time="2025-11-06T00:29:51.507542980Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:51.509510 containerd[1556]: time="2025-11-06T00:29:51.509471588Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:51.510303 containerd[1556]: time="2025-11-06T00:29:51.510279346Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.312698543s"
Nov 6 00:29:51.510412 containerd[1556]: time="2025-11-06T00:29:51.510352993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 6 00:29:51.516015 containerd[1556]: time="2025-11-06T00:29:51.515977509Z" level=info msg="CreateContainer within sandbox \"6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 6 00:29:51.527920 containerd[1556]: time="2025-11-06T00:29:51.527875035Z" level=info msg="Container fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:29:51.530945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726363209.mount: Deactivated successfully.
Nov 6 00:29:51.541884 containerd[1556]: time="2025-11-06T00:29:51.541805978Z" level=info msg="CreateContainer within sandbox \"6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee\""
Nov 6 00:29:51.543067 containerd[1556]: time="2025-11-06T00:29:51.543016846Z" level=info msg="StartContainer for \"fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee\""
Nov 6 00:29:51.544726 containerd[1556]: time="2025-11-06T00:29:51.544680847Z" level=info msg="connecting to shim fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee" address="unix:///run/containerd/s/483656e7e22bc99be5bf6d749ceb72c12683b2bf6866b2ee08904d19c8cb6960" protocol=ttrpc version=3
Nov 6 00:29:51.576315 systemd[1]: Started cri-containerd-fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee.scope - libcontainer container fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee.
Nov 6 00:29:51.623055 containerd[1556]: time="2025-11-06T00:29:51.623012326Z" level=info msg="StartContainer for \"fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee\" returns successfully"
Nov 6 00:29:52.535465 kubelet[2741]: I1106 00:29:52.535416 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j6lwq" podStartSLOduration=3.535239971 podStartE2EDuration="3.535239971s" podCreationTimestamp="2025-11-06 00:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:50.528923349 +0000 UTC m=+8.219530353" watchObservedRunningTime="2025-11-06 00:29:52.535239971 +0000 UTC m=+10.225846975"
Nov 6 00:29:52.536000 kubelet[2741]: I1106 00:29:52.535885 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-7x6rd" podStartSLOduration=2.219766111 podStartE2EDuration="3.535841557s" podCreationTimestamp="2025-11-06 00:29:49 +0000 UTC" firstStartedPulling="2025-11-06 00:29:50.195392106 +0000 UTC m=+7.885999110" lastFinishedPulling="2025-11-06 00:29:51.511467551 +0000 UTC m=+9.202074556" observedRunningTime="2025-11-06 00:29:52.535791572 +0000 UTC m=+10.226398576" watchObservedRunningTime="2025-11-06 00:29:52.535841557 +0000 UTC m=+10.226448561"
Nov 6 00:29:52.548068 kubelet[2741]: E1106 00:29:52.548041 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:53.524905 kubelet[2741]: E1106 00:29:53.524862 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:54.390504 systemd[1]: cri-containerd-fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee.scope: Deactivated successfully.
Nov 6 00:29:54.394805 containerd[1556]: time="2025-11-06T00:29:54.394762921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee\" id:\"fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee\" pid:3066 exit_status:1 exited_at:{seconds:1762388994 nanos:394020402}"
Nov 6 00:29:54.395980 containerd[1556]: time="2025-11-06T00:29:54.395891602Z" level=info msg="received exit event container_id:\"fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee\" id:\"fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee\" pid:3066 exit_status:1 exited_at:{seconds:1762388994 nanos:394020402}"
Nov 6 00:29:54.444935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee-rootfs.mount: Deactivated successfully.
Nov 6 00:29:54.791305 update_engine[1533]: I20251106 00:29:54.791217 1533 update_attempter.cc:509] Updating boot flags...
Nov 6 00:29:55.116811 kubelet[2741]: E1106 00:29:55.116652 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:55.531196 kubelet[2741]: I1106 00:29:55.531111 2741 scope.go:117] "RemoveContainer" containerID="fd6567268ef4d7f3b7a6e3da013553a2adeb7d9dae388ff6e0c59783660080ee"
Nov 6 00:29:55.531616 kubelet[2741]: E1106 00:29:55.531371 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:55.534973 containerd[1556]: time="2025-11-06T00:29:55.534219542Z" level=info msg="CreateContainer within sandbox \"6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 6 00:29:55.543776 containerd[1556]: time="2025-11-06T00:29:55.543713075Z" level=info msg="Container 28add5d4355012e1c96a5b2e07099bd01178a4212f9b431cfbdf674ae58cf407: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:29:55.548735 containerd[1556]: time="2025-11-06T00:29:55.548684529Z" level=info msg="CreateContainer within sandbox \"6ed3f6b4a5fade398815eb4f8adf00a0d29056bcec85b0a7f19607d50b4a665a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"28add5d4355012e1c96a5b2e07099bd01178a4212f9b431cfbdf674ae58cf407\""
Nov 6 00:29:55.552170 containerd[1556]: time="2025-11-06T00:29:55.549420003Z" level=info msg="StartContainer for \"28add5d4355012e1c96a5b2e07099bd01178a4212f9b431cfbdf674ae58cf407\""
Nov 6 00:29:55.552170 containerd[1556]: time="2025-11-06T00:29:55.550803228Z" level=info msg="connecting to shim 28add5d4355012e1c96a5b2e07099bd01178a4212f9b431cfbdf674ae58cf407" address="unix:///run/containerd/s/483656e7e22bc99be5bf6d749ceb72c12683b2bf6866b2ee08904d19c8cb6960" protocol=ttrpc version=3
Nov 6 00:29:55.579850 systemd[1]: Started cri-containerd-28add5d4355012e1c96a5b2e07099bd01178a4212f9b431cfbdf674ae58cf407.scope - libcontainer container 28add5d4355012e1c96a5b2e07099bd01178a4212f9b431cfbdf674ae58cf407.
Nov 6 00:29:55.619611 containerd[1556]: time="2025-11-06T00:29:55.619518551Z" level=info msg="StartContainer for \"28add5d4355012e1c96a5b2e07099bd01178a4212f9b431cfbdf674ae58cf407\" returns successfully"
Nov 6 00:29:56.979657 kubelet[2741]: E1106 00:29:56.979591 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:29:57.573527 sudo[1797]: pam_unix(sudo:session): session closed for user root
Nov 6 00:29:57.628418 sshd[1796]: Connection closed by 139.178.89.65 port 45782
Nov 6 00:29:57.629460 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:57.636467 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit.
Nov 6 00:29:57.637027 systemd[1]: sshd@6-172.232.1.216:22-139.178.89.65:45782.service: Deactivated successfully.
Nov 6 00:29:57.639956 systemd[1]: session-7.scope: Deactivated successfully.
Nov 6 00:29:57.640241 systemd[1]: session-7.scope: Consumed 5.075s CPU time, 233M memory peak.
Nov 6 00:29:57.643662 systemd-logind[1532]: Removed session 7.
Nov 6 00:30:03.899552 systemd[1]: Created slice kubepods-besteffort-pod202a5497_383b_4691_8dec_6f92433dd9f0.slice - libcontainer container kubepods-besteffort-pod202a5497_383b_4691_8dec_6f92433dd9f0.slice.
Nov 6 00:30:03.986818 kubelet[2741]: I1106 00:30:03.986752 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/202a5497-383b-4691-8dec-6f92433dd9f0-tigera-ca-bundle\") pod \"calico-typha-bcd5c775c-j6xsg\" (UID: \"202a5497-383b-4691-8dec-6f92433dd9f0\") " pod="calico-system/calico-typha-bcd5c775c-j6xsg"
Nov 6 00:30:03.986818 kubelet[2741]: I1106 00:30:03.986795 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/202a5497-383b-4691-8dec-6f92433dd9f0-typha-certs\") pod \"calico-typha-bcd5c775c-j6xsg\" (UID: \"202a5497-383b-4691-8dec-6f92433dd9f0\") " pod="calico-system/calico-typha-bcd5c775c-j6xsg"
Nov 6 00:30:03.986818 kubelet[2741]: I1106 00:30:03.986817 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szlws\" (UniqueName: \"kubernetes.io/projected/202a5497-383b-4691-8dec-6f92433dd9f0-kube-api-access-szlws\") pod \"calico-typha-bcd5c775c-j6xsg\" (UID: \"202a5497-383b-4691-8dec-6f92433dd9f0\") " pod="calico-system/calico-typha-bcd5c775c-j6xsg"
Nov 6 00:30:04.208065 kubelet[2741]: E1106 00:30:04.207932 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:30:04.209137 containerd[1556]: time="2025-11-06T00:30:04.209103186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bcd5c775c-j6xsg,Uid:202a5497-383b-4691-8dec-6f92433dd9f0,Namespace:calico-system,Attempt:0,}"
Nov 6 00:30:04.227190 containerd[1556]: time="2025-11-06T00:30:04.226546224Z" level=info msg="connecting to shim d93766d3fe8d3d9a53e9ec32c078066f532e4a18b05ded2c1fd557d61233c0e8" address="unix:///run/containerd/s/a07831ea6ebf2b78566f2ac8bd83f19be76d5e3c9201f2839ea42197cb9b16e0" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:30:04.261302 systemd[1]: Started cri-containerd-d93766d3fe8d3d9a53e9ec32c078066f532e4a18b05ded2c1fd557d61233c0e8.scope - libcontainer container d93766d3fe8d3d9a53e9ec32c078066f532e4a18b05ded2c1fd557d61233c0e8.
Nov 6 00:30:04.312840 systemd[1]: Created slice kubepods-besteffort-podb3564aab_660c_4595_9599_de5f59c71a9d.slice - libcontainer container kubepods-besteffort-podb3564aab_660c_4595_9599_de5f59c71a9d.slice.
Nov 6 00:30:04.355220 containerd[1556]: time="2025-11-06T00:30:04.355187460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bcd5c775c-j6xsg,Uid:202a5497-383b-4691-8dec-6f92433dd9f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d93766d3fe8d3d9a53e9ec32c078066f532e4a18b05ded2c1fd557d61233c0e8\""
Nov 6 00:30:04.356239 kubelet[2741]: E1106 00:30:04.355928 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:30:04.357645 containerd[1556]: time="2025-11-06T00:30:04.357628843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 6 00:30:04.389322 kubelet[2741]: I1106 00:30:04.389292 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-xtables-lock\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389525 kubelet[2741]: I1106 00:30:04.389420 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-policysync\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389525 kubelet[2741]: I1106 00:30:04.389445 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-cni-log-dir\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389525 kubelet[2741]: I1106 00:30:04.389465 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-flexvol-driver-host\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389525 kubelet[2741]: I1106 00:30:04.389479 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b3564aab-660c-4595-9599-de5f59c71a9d-node-certs\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389525 kubelet[2741]: I1106 00:30:04.389503 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvq6j\" (UniqueName: \"kubernetes.io/projected/b3564aab-660c-4595-9599-de5f59c71a9d-kube-api-access-hvq6j\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389685 kubelet[2741]: I1106 00:30:04.389547 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-cni-bin-dir\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389685 kubelet[2741]: I1106 00:30:04.389564 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-cni-net-dir\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389685 kubelet[2741]: I1106 00:30:04.389578 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-lib-modules\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389685 kubelet[2741]: I1106 00:30:04.389595 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3564aab-660c-4595-9599-de5f59c71a9d-tigera-ca-bundle\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389685 kubelet[2741]: I1106 00:30:04.389609 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-var-lib-calico\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.389795 kubelet[2741]: I1106 00:30:04.389628 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b3564aab-660c-4595-9599-de5f59c71a9d-var-run-calico\") pod \"calico-node-hdlwj\" (UID: \"b3564aab-660c-4595-9599-de5f59c71a9d\") " pod="calico-system/calico-node-hdlwj"
Nov 6 00:30:04.484244 kubelet[2741]: E1106 00:30:04.483710 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340"
Nov 6 00:30:04.490299 kubelet[2741]: I1106 00:30:04.490269 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d99fbda4-0f0c-421c-a518-a4c5a391c340-socket-dir\") pod \"csi-node-driver-s88hx\" (UID: \"d99fbda4-0f0c-421c-a518-a4c5a391c340\") " pod="calico-system/csi-node-driver-s88hx"
Nov 6 00:30:04.490369 kubelet[2741]: I1106 00:30:04.490304 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d99fbda4-0f0c-421c-a518-a4c5a391c340-registration-dir\") pod \"csi-node-driver-s88hx\" (UID: \"d99fbda4-0f0c-421c-a518-a4c5a391c340\") " pod="calico-system/csi-node-driver-s88hx"
Nov 6 00:30:04.490369 kubelet[2741]: I1106 00:30:04.490333 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d99fbda4-0f0c-421c-a518-a4c5a391c340-kubelet-dir\") pod \"csi-node-driver-s88hx\" (UID: \"d99fbda4-0f0c-421c-a518-a4c5a391c340\") " pod="calico-system/csi-node-driver-s88hx"
Nov 6 00:30:04.490369 kubelet[2741]: I1106 00:30:04.490348 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgm4\" (UniqueName: \"kubernetes.io/projected/d99fbda4-0f0c-421c-a518-a4c5a391c340-kube-api-access-kdgm4\") pod \"csi-node-driver-s88hx\" (UID: \"d99fbda4-0f0c-421c-a518-a4c5a391c340\") " pod="calico-system/csi-node-driver-s88hx"
Nov 6 00:30:04.490369 kubelet[2741]: I1106 00:30:04.490369 2741
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d99fbda4-0f0c-421c-a518-a4c5a391c340-varrun\") pod \"csi-node-driver-s88hx\" (UID: \"d99fbda4-0f0c-421c-a518-a4c5a391c340\") " pod="calico-system/csi-node-driver-s88hx" Nov 6 00:30:04.491481 kubelet[2741]: E1106 00:30:04.491432 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.491534 kubelet[2741]: W1106 00:30:04.491495 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.491534 kubelet[2741]: E1106 00:30:04.491520 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.491887 kubelet[2741]: E1106 00:30:04.491864 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.491887 kubelet[2741]: W1106 00:30:04.491882 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.491942 kubelet[2741]: E1106 00:30:04.491894 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [log condensed: the preceding three-message FlexVolume sequence — driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds ... executable file not found in $PATH", plugins.go:697 "Error dynamically probing plugins ... error creating Flexvolume plugin from directory nodeagent~uds" — repeats verbatim, with only timestamps changing, from Nov 6 00:30:04.491 through 00:30:04.595; repeats omitted] Nov 6 00:30:04.595326 kubelet[2741]: E1106 00:30:04.595252 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.595767 kubelet[2741]: E1106 00:30:04.595633 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.595767 kubelet[2741]: W1106 00:30:04.595646 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.595767 kubelet[2741]: E1106 00:30:04.595654 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.596045 kubelet[2741]: E1106 00:30:04.596022 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.596090 kubelet[2741]: W1106 00:30:04.596075 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.596122 kubelet[2741]: E1106 00:30:04.596088 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.596486 kubelet[2741]: E1106 00:30:04.596430 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.596486 kubelet[2741]: W1106 00:30:04.596441 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.596486 kubelet[2741]: E1106 00:30:04.596449 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.596828 kubelet[2741]: E1106 00:30:04.596810 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.596828 kubelet[2741]: W1106 00:30:04.596824 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.596893 kubelet[2741]: E1106 00:30:04.596837 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.597369 kubelet[2741]: E1106 00:30:04.597338 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.597407 kubelet[2741]: W1106 00:30:04.597355 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.597407 kubelet[2741]: E1106 00:30:04.597402 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.597877 kubelet[2741]: E1106 00:30:04.597774 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.597877 kubelet[2741]: W1106 00:30:04.597785 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.597877 kubelet[2741]: E1106 00:30:04.597793 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.598606 kubelet[2741]: E1106 00:30:04.598547 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.598606 kubelet[2741]: W1106 00:30:04.598598 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.598835 kubelet[2741]: E1106 00:30:04.598611 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.598995 kubelet[2741]: E1106 00:30:04.598976 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.598995 kubelet[2741]: W1106 00:30:04.598991 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.599056 kubelet[2741]: E1106 00:30:04.599002 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.599442 kubelet[2741]: E1106 00:30:04.599423 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.599442 kubelet[2741]: W1106 00:30:04.599439 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.599510 kubelet[2741]: E1106 00:30:04.599451 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.599726 kubelet[2741]: E1106 00:30:04.599708 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.599726 kubelet[2741]: W1106 00:30:04.599724 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.600410 kubelet[2741]: E1106 00:30:04.599735 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.600410 kubelet[2741]: E1106 00:30:04.600010 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.600410 kubelet[2741]: W1106 00:30:04.600040 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.600410 kubelet[2741]: E1106 00:30:04.600052 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.600746 kubelet[2741]: E1106 00:30:04.600611 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.600746 kubelet[2741]: W1106 00:30:04.600623 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.600746 kubelet[2741]: E1106 00:30:04.600631 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.601060 kubelet[2741]: E1106 00:30:04.601041 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.601060 kubelet[2741]: W1106 00:30:04.601056 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.601113 kubelet[2741]: E1106 00:30:04.601067 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.601426 kubelet[2741]: E1106 00:30:04.601408 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.601471 kubelet[2741]: W1106 00:30:04.601459 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.601499 kubelet[2741]: E1106 00:30:04.601471 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:04.601853 kubelet[2741]: E1106 00:30:04.601834 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.601853 kubelet[2741]: W1106 00:30:04.601850 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.601907 kubelet[2741]: E1106 00:30:04.601864 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:04.611845 kubelet[2741]: E1106 00:30:04.611798 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:04.611845 kubelet[2741]: W1106 00:30:04.611812 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:04.611845 kubelet[2741]: E1106 00:30:04.611822 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 6 00:30:04.620183 kubelet[2741]: E1106 00:30:04.620103 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:30:04.620675 containerd[1556]: time="2025-11-06T00:30:04.620637567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdlwj,Uid:b3564aab-660c-4595-9599-de5f59c71a9d,Namespace:calico-system,Attempt:0,}"
Nov 6 00:30:04.639005 containerd[1556]: time="2025-11-06T00:30:04.638757733Z" level=info msg="connecting to shim 07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c" address="unix:///run/containerd/s/cfdbaffe03447f47cf9850707de7145238fa916027ef8a21fc41b33deee9be30" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:30:04.667280 systemd[1]: Started cri-containerd-07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c.scope - libcontainer container 07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c.
Nov 6 00:30:04.694289 containerd[1556]: time="2025-11-06T00:30:04.694213806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdlwj,Uid:b3564aab-660c-4595-9599-de5f59c71a9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c\""
Nov 6 00:30:04.696803 kubelet[2741]: E1106 00:30:04.696777 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:30:05.258818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006654461.mount: Deactivated successfully.
Nov 6 00:30:06.191459 containerd[1556]: time="2025-11-06T00:30:06.191373242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:30:06.192321 containerd[1556]: time="2025-11-06T00:30:06.192062268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 6 00:30:06.192852 containerd[1556]: time="2025-11-06T00:30:06.192820406Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:30:06.194485 containerd[1556]: time="2025-11-06T00:30:06.194442626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:30:06.195240 containerd[1556]: time="2025-11-06T00:30:06.195210765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.837429315s"
Nov 6 00:30:06.195313 containerd[1556]: time="2025-11-06T00:30:06.195298938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 6 00:30:06.196384 containerd[1556]: time="2025-11-06T00:30:06.196347017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 6 00:30:06.215225 containerd[1556]: time="2025-11-06T00:30:06.215184068Z" level=info msg="CreateContainer within sandbox \"d93766d3fe8d3d9a53e9ec32c078066f532e4a18b05ded2c1fd557d61233c0e8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 6 00:30:06.224210 containerd[1556]: time="2025-11-06T00:30:06.222125676Z" level=info msg="Container ea04ac9db0d9286ee2074cb28d5e9a34a67430a74ff656e77d891ef297b39c4d: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:30:06.230534 containerd[1556]: time="2025-11-06T00:30:06.230486947Z" level=info msg="CreateContainer within sandbox \"d93766d3fe8d3d9a53e9ec32c078066f532e4a18b05ded2c1fd557d61233c0e8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea04ac9db0d9286ee2074cb28d5e9a34a67430a74ff656e77d891ef297b39c4d\""
Nov 6 00:30:06.231038 containerd[1556]: time="2025-11-06T00:30:06.230991206Z" level=info msg="StartContainer for \"ea04ac9db0d9286ee2074cb28d5e9a34a67430a74ff656e77d891ef297b39c4d\""
Nov 6 00:30:06.232905 containerd[1556]: time="2025-11-06T00:30:06.232849745Z" level=info msg="connecting to shim ea04ac9db0d9286ee2074cb28d5e9a34a67430a74ff656e77d891ef297b39c4d" address="unix:///run/containerd/s/a07831ea6ebf2b78566f2ac8bd83f19be76d5e3c9201f2839ea42197cb9b16e0" protocol=ttrpc version=3
Nov 6 00:30:06.264634 systemd[1]: Started cri-containerd-ea04ac9db0d9286ee2074cb28d5e9a34a67430a74ff656e77d891ef297b39c4d.scope - libcontainer container ea04ac9db0d9286ee2074cb28d5e9a34a67430a74ff656e77d891ef297b39c4d.
Nov 6 00:30:06.328936 containerd[1556]: time="2025-11-06T00:30:06.328887238Z" level=info msg="StartContainer for \"ea04ac9db0d9286ee2074cb28d5e9a34a67430a74ff656e77d891ef297b39c4d\" returns successfully"
Nov 6 00:30:06.462051 kubelet[2741]: E1106 00:30:06.461898 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340"
Nov 6 00:30:06.569023 kubelet[2741]: E1106 00:30:06.568912 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:30:06.598029 kubelet[2741]: E1106 00:30:06.597906 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:30:06.598029 kubelet[2741]: W1106 00:30:06.597960 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:30:06.598029 kubelet[2741]: E1106 00:30:06.597977 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Nov 6 00:30:06.619110 kubelet[2741]: E1106 00:30:06.619073 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.619199 kubelet[2741]: W1106 00:30:06.619106 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.619199 kubelet[2741]: E1106 00:30:06.619135 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:06.620708 kubelet[2741]: E1106 00:30:06.620675 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.620708 kubelet[2741]: W1106 00:30:06.620702 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.620784 kubelet[2741]: E1106 00:30:06.620719 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:06.622484 kubelet[2741]: E1106 00:30:06.622466 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.622484 kubelet[2741]: W1106 00:30:06.622481 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.622542 kubelet[2741]: E1106 00:30:06.622493 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:06.622769 kubelet[2741]: E1106 00:30:06.622748 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.622769 kubelet[2741]: W1106 00:30:06.622761 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.622769 kubelet[2741]: E1106 00:30:06.622769 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:06.623001 kubelet[2741]: E1106 00:30:06.622982 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.623001 kubelet[2741]: W1106 00:30:06.622994 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.623001 kubelet[2741]: E1106 00:30:06.623002 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:06.623675 kubelet[2741]: E1106 00:30:06.623641 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.623675 kubelet[2741]: W1106 00:30:06.623660 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.623675 kubelet[2741]: E1106 00:30:06.623674 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:06.624105 kubelet[2741]: E1106 00:30:06.624078 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.624105 kubelet[2741]: W1106 00:30:06.624099 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.624203 kubelet[2741]: E1106 00:30:06.624112 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:06.625460 kubelet[2741]: E1106 00:30:06.625432 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.625460 kubelet[2741]: W1106 00:30:06.625452 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.625536 kubelet[2741]: E1106 00:30:06.625465 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:06.625958 kubelet[2741]: E1106 00:30:06.625931 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.625958 kubelet[2741]: W1106 00:30:06.625950 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.626021 kubelet[2741]: E1106 00:30:06.625962 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:30:06.626205 kubelet[2741]: E1106 00:30:06.626178 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:30:06.626205 kubelet[2741]: W1106 00:30:06.626198 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:30:06.626321 kubelet[2741]: E1106 00:30:06.626210 2741 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:30:06.934692 containerd[1556]: time="2025-11-06T00:30:06.934615313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:06.935414 containerd[1556]: time="2025-11-06T00:30:06.935309418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:30:06.935950 containerd[1556]: time="2025-11-06T00:30:06.935911810Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:06.937814 containerd[1556]: time="2025-11-06T00:30:06.937779500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:06.938703 containerd[1556]: time="2025-11-06T00:30:06.938674274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 742.060786ms" Nov 6 00:30:06.938789 containerd[1556]: time="2025-11-06T00:30:06.938773407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:30:06.943516 containerd[1556]: time="2025-11-06T00:30:06.943485882Z" level=info msg="CreateContainer within sandbox \"07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:30:06.953071 containerd[1556]: time="2025-11-06T00:30:06.952302970Z" level=info msg="Container fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:30:06.957894 containerd[1556]: time="2025-11-06T00:30:06.957855127Z" level=info msg="CreateContainer within sandbox \"07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a\"" Nov 6 00:30:06.958428 containerd[1556]: time="2025-11-06T00:30:06.958374876Z" level=info msg="StartContainer for \"fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a\"" Nov 6 00:30:06.961036 containerd[1556]: time="2025-11-06T00:30:06.961004164Z" level=info msg="connecting to shim fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a" address="unix:///run/containerd/s/cfdbaffe03447f47cf9850707de7145238fa916027ef8a21fc41b33deee9be30" protocol=ttrpc version=3 Nov 6 00:30:06.990282 systemd[1]: Started cri-containerd-fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a.scope - libcontainer container fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a. Nov 6 00:30:07.047532 containerd[1556]: time="2025-11-06T00:30:07.047487465Z" level=info msg="StartContainer for \"fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a\" returns successfully" Nov 6 00:30:07.064765 systemd[1]: cri-containerd-fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a.scope: Deactivated successfully. 
Nov 6 00:30:07.067704 containerd[1556]: time="2025-11-06T00:30:07.067679310Z" level=info msg="received exit event container_id:\"fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a\" id:\"fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a\" pid:3477 exited_at:{seconds:1762389007 nanos:67098160}" Nov 6 00:30:07.068726 containerd[1556]: time="2025-11-06T00:30:07.068698026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a\" id:\"fb52fef7451c7b283b3eeefa634cb765e00d8f5b5e7865e7bc5bbcc5ca2acf5a\" pid:3477 exited_at:{seconds:1762389007 nanos:67098160}" Nov 6 00:30:07.575307 kubelet[2741]: I1106 00:30:07.575257 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:30:07.577382 kubelet[2741]: E1106 00:30:07.575747 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:07.577561 kubelet[2741]: E1106 00:30:07.577538 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:07.580294 containerd[1556]: time="2025-11-06T00:30:07.580240002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:30:07.594909 kubelet[2741]: I1106 00:30:07.594843 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bcd5c775c-j6xsg" podStartSLOduration=2.755856025 podStartE2EDuration="4.594823001s" podCreationTimestamp="2025-11-06 00:30:03 +0000 UTC" firstStartedPulling="2025-11-06 00:30:04.357108741 +0000 UTC m=+22.047715745" lastFinishedPulling="2025-11-06 00:30:06.196075717 +0000 UTC m=+23.886682721" observedRunningTime="2025-11-06 00:30:06.596304487 +0000 UTC m=+24.286911491" 
watchObservedRunningTime="2025-11-06 00:30:07.594823001 +0000 UTC m=+25.285430005" Nov 6 00:30:08.462813 kubelet[2741]: E1106 00:30:08.462751 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:30:09.756203 containerd[1556]: time="2025-11-06T00:30:09.756109161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:09.757113 containerd[1556]: time="2025-11-06T00:30:09.756858365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:30:09.757673 containerd[1556]: time="2025-11-06T00:30:09.757646039Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:09.759165 containerd[1556]: time="2025-11-06T00:30:09.759123875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:09.759836 containerd[1556]: time="2025-11-06T00:30:09.759802445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.17917902s" Nov 6 00:30:09.759880 containerd[1556]: time="2025-11-06T00:30:09.759837886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image 
reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:30:09.769573 containerd[1556]: time="2025-11-06T00:30:09.769533184Z" level=info msg="CreateContainer within sandbox \"07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:30:09.779433 containerd[1556]: time="2025-11-06T00:30:09.779408428Z" level=info msg="Container f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:30:09.785585 containerd[1556]: time="2025-11-06T00:30:09.785542567Z" level=info msg="CreateContainer within sandbox \"07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239\"" Nov 6 00:30:09.786234 containerd[1556]: time="2025-11-06T00:30:09.786200247Z" level=info msg="StartContainer for \"f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239\"" Nov 6 00:30:09.787853 containerd[1556]: time="2025-11-06T00:30:09.787823217Z" level=info msg="connecting to shim f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239" address="unix:///run/containerd/s/cfdbaffe03447f47cf9850707de7145238fa916027ef8a21fc41b33deee9be30" protocol=ttrpc version=3 Nov 6 00:30:09.814295 systemd[1]: Started cri-containerd-f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239.scope - libcontainer container f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239. 
Nov 6 00:30:09.874760 containerd[1556]: time="2025-11-06T00:30:09.874722239Z" level=info msg="StartContainer for \"f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239\" returns successfully" Nov 6 00:30:10.386118 containerd[1556]: time="2025-11-06T00:30:10.386050290Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:30:10.391201 systemd[1]: cri-containerd-f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239.scope: Deactivated successfully. Nov 6 00:30:10.391529 systemd[1]: cri-containerd-f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239.scope: Consumed 563ms CPU time, 192.1M memory peak, 171.3M written to disk. Nov 6 00:30:10.392767 containerd[1556]: time="2025-11-06T00:30:10.392736242Z" level=info msg="received exit event container_id:\"f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239\" id:\"f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239\" pid:3536 exited_at:{seconds:1762389010 nanos:392261299}" Nov 6 00:30:10.394095 containerd[1556]: time="2025-11-06T00:30:10.393783673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239\" id:\"f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239\" pid:3536 exited_at:{seconds:1762389010 nanos:392261299}" Nov 6 00:30:10.421104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4932596a7ea05b5f31993e6a88939b6aa189c80b2408f8014c2f4704fe32239-rootfs.mount: Deactivated successfully. 
Nov 6 00:30:10.461403 kubelet[2741]: I1106 00:30:10.461363 2741 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 00:30:10.479660 systemd[1]: Created slice kubepods-besteffort-podd99fbda4_0f0c_421c_a518_a4c5a391c340.slice - libcontainer container kubepods-besteffort-podd99fbda4_0f0c_421c_a518_a4c5a391c340.slice. Nov 6 00:30:10.489066 containerd[1556]: time="2025-11-06T00:30:10.487762454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s88hx,Uid:d99fbda4-0f0c-421c-a518-a4c5a391c340,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:10.525972 systemd[1]: Created slice kubepods-besteffort-podb84c01a2_1d41_475e_8a7a_47755f9e00e7.slice - libcontainer container kubepods-besteffort-podb84c01a2_1d41_475e_8a7a_47755f9e00e7.slice. Nov 6 00:30:10.543719 kubelet[2741]: I1106 00:30:10.543435 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bzx\" (UniqueName: \"kubernetes.io/projected/b84c01a2-1d41-475e-8a7a-47755f9e00e7-kube-api-access-g4bzx\") pod \"calico-apiserver-765cfc7478-h4bdp\" (UID: \"b84c01a2-1d41-475e-8a7a-47755f9e00e7\") " pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" Nov 6 00:30:10.545170 kubelet[2741]: I1106 00:30:10.543472 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b84c01a2-1d41-475e-8a7a-47755f9e00e7-calico-apiserver-certs\") pod \"calico-apiserver-765cfc7478-h4bdp\" (UID: \"b84c01a2-1d41-475e-8a7a-47755f9e00e7\") " pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" Nov 6 00:30:10.545170 kubelet[2741]: I1106 00:30:10.543838 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b16b4278-934a-48e0-9503-bce8310d9168-whisker-backend-key-pair\") pod \"whisker-5fd49bfb84-zzvkv\" (UID: 
\"b16b4278-934a-48e0-9503-bce8310d9168\") " pod="calico-system/whisker-5fd49bfb84-zzvkv" Nov 6 00:30:10.545170 kubelet[2741]: I1106 00:30:10.543854 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkm8q\" (UniqueName: \"kubernetes.io/projected/b16b4278-934a-48e0-9503-bce8310d9168-kube-api-access-pkm8q\") pod \"whisker-5fd49bfb84-zzvkv\" (UID: \"b16b4278-934a-48e0-9503-bce8310d9168\") " pod="calico-system/whisker-5fd49bfb84-zzvkv" Nov 6 00:30:10.545170 kubelet[2741]: I1106 00:30:10.543883 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b16b4278-934a-48e0-9503-bce8310d9168-whisker-ca-bundle\") pod \"whisker-5fd49bfb84-zzvkv\" (UID: \"b16b4278-934a-48e0-9503-bce8310d9168\") " pod="calico-system/whisker-5fd49bfb84-zzvkv" Nov 6 00:30:10.545170 kubelet[2741]: I1106 00:30:10.543896 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8977de6-09a2-44c2-9dd7-aa57e7cd985b-config-volume\") pod \"coredns-66bc5c9577-5gb5m\" (UID: \"f8977de6-09a2-44c2-9dd7-aa57e7cd985b\") " pod="kube-system/coredns-66bc5c9577-5gb5m" Nov 6 00:30:10.545381 kubelet[2741]: I1106 00:30:10.543912 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chb8g\" (UniqueName: \"kubernetes.io/projected/f8977de6-09a2-44c2-9dd7-aa57e7cd985b-kube-api-access-chb8g\") pod \"coredns-66bc5c9577-5gb5m\" (UID: \"f8977de6-09a2-44c2-9dd7-aa57e7cd985b\") " pod="kube-system/coredns-66bc5c9577-5gb5m" Nov 6 00:30:10.549322 kubelet[2741]: E1106 00:30:10.547576 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:172-232-1-216\" cannot list resource \"secrets\" in API group \"\" in the 
namespace \"calico-system\": no relationship found between node '172-232-1-216' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-backend-key-pair\"" type="*v1.Secret" Nov 6 00:30:10.554171 systemd[1]: Created slice kubepods-besteffort-podb16b4278_934a_48e0_9503_bce8310d9168.slice - libcontainer container kubepods-besteffort-podb16b4278_934a_48e0_9503_bce8310d9168.slice. Nov 6 00:30:10.577773 systemd[1]: Created slice kubepods-burstable-podf8977de6_09a2_44c2_9dd7_aa57e7cd985b.slice - libcontainer container kubepods-burstable-podf8977de6_09a2_44c2_9dd7_aa57e7cd985b.slice. Nov 6 00:30:10.590419 containerd[1556]: time="2025-11-06T00:30:10.590372085Z" level=error msg="Failed to destroy network for sandbox \"5cbd213e6115bf1706a2a7700b805e65a78f4f29b55a792c2244572a706f5f51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:10.592411 containerd[1556]: time="2025-11-06T00:30:10.592367932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s88hx,Uid:d99fbda4-0f0c-421c-a518-a4c5a391c340,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbd213e6115bf1706a2a7700b805e65a78f4f29b55a792c2244572a706f5f51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:10.593193 kubelet[2741]: E1106 00:30:10.592776 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbd213e6115bf1706a2a7700b805e65a78f4f29b55a792c2244572a706f5f51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 6 00:30:10.593193 kubelet[2741]: E1106 00:30:10.592819 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbd213e6115bf1706a2a7700b805e65a78f4f29b55a792c2244572a706f5f51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s88hx" Nov 6 00:30:10.593193 kubelet[2741]: E1106 00:30:10.592834 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbd213e6115bf1706a2a7700b805e65a78f4f29b55a792c2244572a706f5f51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s88hx" Nov 6 00:30:10.593298 kubelet[2741]: E1106 00:30:10.592867 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cbd213e6115bf1706a2a7700b805e65a78f4f29b55a792c2244572a706f5f51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:30:10.595649 systemd[1]: run-netns-cni\x2df09432d6\x2d91cd\x2dfffd\x2d09f3\x2d7ee76f4743a0.mount: Deactivated successfully. 
Nov 6 00:30:10.604186 systemd[1]: Created slice kubepods-burstable-podd35ab5e8_6aeb_4e34_b248_eb58835d1058.slice - libcontainer container kubepods-burstable-podd35ab5e8_6aeb_4e34_b248_eb58835d1058.slice. Nov 6 00:30:10.608949 kubelet[2741]: E1106 00:30:10.608921 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:10.615207 containerd[1556]: time="2025-11-06T00:30:10.614972614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:30:10.618272 systemd[1]: Created slice kubepods-besteffort-pod456ae3b3_bf69_483d_be8f_52ebc676c862.slice - libcontainer container kubepods-besteffort-pod456ae3b3_bf69_483d_be8f_52ebc676c862.slice. Nov 6 00:30:10.628510 systemd[1]: Created slice kubepods-besteffort-pod0b8ecf15_79f1_43df_a5c1_419b36087e14.slice - libcontainer container kubepods-besteffort-pod0b8ecf15_79f1_43df_a5c1_419b36087e14.slice. Nov 6 00:30:10.639113 systemd[1]: Created slice kubepods-besteffort-podad57f6e7_ba94_4071_8188_eaaec8d179ad.slice - libcontainer container kubepods-besteffort-podad57f6e7_ba94_4071_8188_eaaec8d179ad.slice. 
Nov 6 00:30:10.645174 kubelet[2741]: I1106 00:30:10.644797 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4kj7\" (UniqueName: \"kubernetes.io/projected/456ae3b3-bf69-483d-be8f-52ebc676c862-kube-api-access-q4kj7\") pod \"calico-kube-controllers-f5f4d7b75-nsr7v\" (UID: \"456ae3b3-bf69-483d-be8f-52ebc676c862\") " pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" Nov 6 00:30:10.645174 kubelet[2741]: I1106 00:30:10.644825 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad57f6e7-ba94-4071-8188-eaaec8d179ad-calico-apiserver-certs\") pod \"calico-apiserver-765cfc7478-qx6xb\" (UID: \"ad57f6e7-ba94-4071-8188-eaaec8d179ad\") " pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" Nov 6 00:30:10.645174 kubelet[2741]: I1106 00:30:10.644885 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/456ae3b3-bf69-483d-be8f-52ebc676c862-tigera-ca-bundle\") pod \"calico-kube-controllers-f5f4d7b75-nsr7v\" (UID: \"456ae3b3-bf69-483d-be8f-52ebc676c862\") " pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" Nov 6 00:30:10.645174 kubelet[2741]: I1106 00:30:10.644910 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d35ab5e8-6aeb-4e34-b248-eb58835d1058-config-volume\") pod \"coredns-66bc5c9577-sm7cx\" (UID: \"d35ab5e8-6aeb-4e34-b248-eb58835d1058\") " pod="kube-system/coredns-66bc5c9577-sm7cx" Nov 6 00:30:10.645174 kubelet[2741]: I1106 00:30:10.644925 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm84\" (UniqueName: \"kubernetes.io/projected/ad57f6e7-ba94-4071-8188-eaaec8d179ad-kube-api-access-bvm84\") pod 
\"calico-apiserver-765cfc7478-qx6xb\" (UID: \"ad57f6e7-ba94-4071-8188-eaaec8d179ad\") " pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" Nov 6 00:30:10.645314 kubelet[2741]: I1106 00:30:10.644954 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbzqz\" (UniqueName: \"kubernetes.io/projected/d35ab5e8-6aeb-4e34-b248-eb58835d1058-kube-api-access-tbzqz\") pod \"coredns-66bc5c9577-sm7cx\" (UID: \"d35ab5e8-6aeb-4e34-b248-eb58835d1058\") " pod="kube-system/coredns-66bc5c9577-sm7cx" Nov 6 00:30:10.645314 kubelet[2741]: I1106 00:30:10.644967 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b8ecf15-79f1-43df-a5c1-419b36087e14-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-ggwql\" (UID: \"0b8ecf15-79f1-43df-a5c1-419b36087e14\") " pod="calico-system/goldmane-7c778bb748-ggwql" Nov 6 00:30:10.645314 kubelet[2741]: I1106 00:30:10.644982 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8ecf15-79f1-43df-a5c1-419b36087e14-config\") pod \"goldmane-7c778bb748-ggwql\" (UID: \"0b8ecf15-79f1-43df-a5c1-419b36087e14\") " pod="calico-system/goldmane-7c778bb748-ggwql" Nov 6 00:30:10.645314 kubelet[2741]: I1106 00:30:10.644996 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwkq\" (UniqueName: \"kubernetes.io/projected/0b8ecf15-79f1-43df-a5c1-419b36087e14-kube-api-access-2wwkq\") pod \"goldmane-7c778bb748-ggwql\" (UID: \"0b8ecf15-79f1-43df-a5c1-419b36087e14\") " pod="calico-system/goldmane-7c778bb748-ggwql" Nov 6 00:30:10.645314 kubelet[2741]: I1106 00:30:10.645026 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/0b8ecf15-79f1-43df-a5c1-419b36087e14-goldmane-key-pair\") pod \"goldmane-7c778bb748-ggwql\" (UID: \"0b8ecf15-79f1-43df-a5c1-419b36087e14\") " pod="calico-system/goldmane-7c778bb748-ggwql" Nov 6 00:30:10.837264 containerd[1556]: time="2025-11-06T00:30:10.837217576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cfc7478-h4bdp,Uid:b84c01a2-1d41-475e-8a7a-47755f9e00e7,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:30:10.890290 kubelet[2741]: E1106 00:30:10.889811 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:10.892505 containerd[1556]: time="2025-11-06T00:30:10.892461990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5gb5m,Uid:f8977de6-09a2-44c2-9dd7-aa57e7cd985b,Namespace:kube-system,Attempt:0,}" Nov 6 00:30:10.910406 containerd[1556]: time="2025-11-06T00:30:10.910318006Z" level=error msg="Failed to destroy network for sandbox \"2a6e7a42e850a64567e79c2fc7792043112df4ee913b4e352cce51e1e6366ba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:10.912936 systemd[1]: run-netns-cni\x2d58cd3fbf\x2de810\x2d8e0e\x2d61e6\x2d849a1767dd6f.mount: Deactivated successfully. 
Nov 6 00:30:10.916405 containerd[1556]: time="2025-11-06T00:30:10.915809124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cfc7478-h4bdp,Uid:b84c01a2-1d41-475e-8a7a-47755f9e00e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6e7a42e850a64567e79c2fc7792043112df4ee913b4e352cce51e1e6366ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:10.916997 kubelet[2741]: E1106 00:30:10.916841 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:10.918456 kubelet[2741]: E1106 00:30:10.917566 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6e7a42e850a64567e79c2fc7792043112df4ee913b4e352cce51e1e6366ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:10.918456 kubelet[2741]: E1106 00:30:10.917608 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6e7a42e850a64567e79c2fc7792043112df4ee913b4e352cce51e1e6366ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" Nov 6 00:30:10.918456 kubelet[2741]: E1106 00:30:10.917628 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2a6e7a42e850a64567e79c2fc7792043112df4ee913b4e352cce51e1e6366ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" Nov 6 00:30:10.918551 kubelet[2741]: E1106 00:30:10.917662 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-765cfc7478-h4bdp_calico-apiserver(b84c01a2-1d41-475e-8a7a-47755f9e00e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-765cfc7478-h4bdp_calico-apiserver(b84c01a2-1d41-475e-8a7a-47755f9e00e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a6e7a42e850a64567e79c2fc7792043112df4ee913b4e352cce51e1e6366ba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:30:10.918938 containerd[1556]: time="2025-11-06T00:30:10.918890873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sm7cx,Uid:d35ab5e8-6aeb-4e34-b248-eb58835d1058,Namespace:kube-system,Attempt:0,}" Nov 6 00:30:10.929636 containerd[1556]: time="2025-11-06T00:30:10.929613853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5f4d7b75-nsr7v,Uid:456ae3b3-bf69-483d-be8f-52ebc676c862,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:10.939063 containerd[1556]: time="2025-11-06T00:30:10.939005923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-ggwql,Uid:0b8ecf15-79f1-43df-a5c1-419b36087e14,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:10.947069 containerd[1556]: time="2025-11-06T00:30:10.947015174Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-765cfc7478-qx6xb,Uid:ad57f6e7-ba94-4071-8188-eaaec8d179ad,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:30:11.028414 containerd[1556]: time="2025-11-06T00:30:11.028376483Z" level=error msg="Failed to destroy network for sandbox \"c28943fa6f47fc668b87e1ace92135ecbd3b63ae046decc0600a1d46dc2adfed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.029630 containerd[1556]: time="2025-11-06T00:30:11.029603527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5gb5m,Uid:f8977de6-09a2-44c2-9dd7-aa57e7cd985b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28943fa6f47fc668b87e1ace92135ecbd3b63ae046decc0600a1d46dc2adfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.030510 kubelet[2741]: E1106 00:30:11.030462 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28943fa6f47fc668b87e1ace92135ecbd3b63ae046decc0600a1d46dc2adfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.030569 kubelet[2741]: E1106 00:30:11.030528 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28943fa6f47fc668b87e1ace92135ecbd3b63ae046decc0600a1d46dc2adfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-5gb5m" Nov 6 00:30:11.030569 kubelet[2741]: E1106 00:30:11.030551 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28943fa6f47fc668b87e1ace92135ecbd3b63ae046decc0600a1d46dc2adfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5gb5m" Nov 6 00:30:11.030628 kubelet[2741]: E1106 00:30:11.030607 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5gb5m_kube-system(f8977de6-09a2-44c2-9dd7-aa57e7cd985b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5gb5m_kube-system(f8977de6-09a2-44c2-9dd7-aa57e7cd985b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c28943fa6f47fc668b87e1ace92135ecbd3b63ae046decc0600a1d46dc2adfed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5gb5m" podUID="f8977de6-09a2-44c2-9dd7-aa57e7cd985b" Nov 6 00:30:11.058327 containerd[1556]: time="2025-11-06T00:30:11.058258973Z" level=error msg="Failed to destroy network for sandbox \"496edc34afc887d56fa1cfa5a6714551d13971574e1eeed71366da0245c670d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.059171 containerd[1556]: time="2025-11-06T00:30:11.059021043Z" level=error msg="Failed to destroy network for sandbox \"b8bacbd7a0134cc79d504590076c64fa50291b0d4208d3ab8355193ff0620fb4\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.063370 containerd[1556]: time="2025-11-06T00:30:11.063090933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5f4d7b75-nsr7v,Uid:456ae3b3-bf69-483d-be8f-52ebc676c862,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bacbd7a0134cc79d504590076c64fa50291b0d4208d3ab8355193ff0620fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.064622 kubelet[2741]: E1106 00:30:11.063613 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bacbd7a0134cc79d504590076c64fa50291b0d4208d3ab8355193ff0620fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.064622 kubelet[2741]: E1106 00:30:11.063689 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bacbd7a0134cc79d504590076c64fa50291b0d4208d3ab8355193ff0620fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" Nov 6 00:30:11.064622 kubelet[2741]: E1106 00:30:11.063714 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bacbd7a0134cc79d504590076c64fa50291b0d4208d3ab8355193ff0620fb4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" Nov 6 00:30:11.064725 containerd[1556]: time="2025-11-06T00:30:11.063676070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-ggwql,Uid:0b8ecf15-79f1-43df-a5c1-419b36087e14,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"496edc34afc887d56fa1cfa5a6714551d13971574e1eeed71366da0245c670d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.064785 kubelet[2741]: E1106 00:30:11.063762 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f5f4d7b75-nsr7v_calico-system(456ae3b3-bf69-483d-be8f-52ebc676c862)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f5f4d7b75-nsr7v_calico-system(456ae3b3-bf69-483d-be8f-52ebc676c862)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8bacbd7a0134cc79d504590076c64fa50291b0d4208d3ab8355193ff0620fb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:30:11.064785 kubelet[2741]: E1106 00:30:11.064100 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496edc34afc887d56fa1cfa5a6714551d13971574e1eeed71366da0245c670d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.064785 kubelet[2741]: E1106 00:30:11.064127 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496edc34afc887d56fa1cfa5a6714551d13971574e1eeed71366da0245c670d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-ggwql" Nov 6 00:30:11.064891 kubelet[2741]: E1106 00:30:11.064828 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496edc34afc887d56fa1cfa5a6714551d13971574e1eeed71366da0245c670d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-ggwql" Nov 6 00:30:11.064891 kubelet[2741]: E1106 00:30:11.064873 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-ggwql_calico-system(0b8ecf15-79f1-43df-a5c1-419b36087e14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-ggwql_calico-system(0b8ecf15-79f1-43df-a5c1-419b36087e14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"496edc34afc887d56fa1cfa5a6714551d13971574e1eeed71366da0245c670d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:30:11.074640 containerd[1556]: time="2025-11-06T00:30:11.074615786Z" level=error msg="Failed to destroy network for sandbox 
\"17dd04d9e2c87568007baaea74e58c091561b4a34c2883d56e8ea0f27a88cdd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.075823 containerd[1556]: time="2025-11-06T00:30:11.075797698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sm7cx,Uid:d35ab5e8-6aeb-4e34-b248-eb58835d1058,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17dd04d9e2c87568007baaea74e58c091561b4a34c2883d56e8ea0f27a88cdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.076047 kubelet[2741]: E1106 00:30:11.076026 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17dd04d9e2c87568007baaea74e58c091561b4a34c2883d56e8ea0f27a88cdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.076193 kubelet[2741]: E1106 00:30:11.076113 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17dd04d9e2c87568007baaea74e58c091561b4a34c2883d56e8ea0f27a88cdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sm7cx" Nov 6 00:30:11.076193 kubelet[2741]: E1106 00:30:11.076132 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"17dd04d9e2c87568007baaea74e58c091561b4a34c2883d56e8ea0f27a88cdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sm7cx" Nov 6 00:30:11.076492 kubelet[2741]: E1106 00:30:11.076426 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sm7cx_kube-system(d35ab5e8-6aeb-4e34-b248-eb58835d1058)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sm7cx_kube-system(d35ab5e8-6aeb-4e34-b248-eb58835d1058)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17dd04d9e2c87568007baaea74e58c091561b4a34c2883d56e8ea0f27a88cdd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sm7cx" podUID="d35ab5e8-6aeb-4e34-b248-eb58835d1058" Nov 6 00:30:11.083855 containerd[1556]: time="2025-11-06T00:30:11.083791034Z" level=error msg="Failed to destroy network for sandbox \"7573862b7fc527c66a72acf0d49f79da57153e737120ff3b485ab09685e98307\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.084690 containerd[1556]: time="2025-11-06T00:30:11.084646857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cfc7478-qx6xb,Uid:ad57f6e7-ba94-4071-8188-eaaec8d179ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7573862b7fc527c66a72acf0d49f79da57153e737120ff3b485ab09685e98307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.085643 kubelet[2741]: E1106 00:30:11.084806 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7573862b7fc527c66a72acf0d49f79da57153e737120ff3b485ab09685e98307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:11.085643 kubelet[2741]: E1106 00:30:11.084831 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7573862b7fc527c66a72acf0d49f79da57153e737120ff3b485ab09685e98307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" Nov 6 00:30:11.085643 kubelet[2741]: E1106 00:30:11.084845 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7573862b7fc527c66a72acf0d49f79da57153e737120ff3b485ab09685e98307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" Nov 6 00:30:11.085910 kubelet[2741]: E1106 00:30:11.084882 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-765cfc7478-qx6xb_calico-apiserver(ad57f6e7-ba94-4071-8188-eaaec8d179ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-765cfc7478-qx6xb_calico-apiserver(ad57f6e7-ba94-4071-8188-eaaec8d179ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7573862b7fc527c66a72acf0d49f79da57153e737120ff3b485ab09685e98307\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:30:11.646960 kubelet[2741]: E1106 00:30:11.646833 2741 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Nov 6 00:30:11.646960 kubelet[2741]: E1106 00:30:11.646963 2741 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16b4278-934a-48e0-9503-bce8310d9168-whisker-backend-key-pair podName:b16b4278-934a-48e0-9503-bce8310d9168 nodeName:}" failed. No retries permitted until 2025-11-06 00:30:12.146940704 +0000 UTC m=+29.837547708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/b16b4278-934a-48e0-9503-bce8310d9168-whisker-backend-key-pair") pod "whisker-5fd49bfb84-zzvkv" (UID: "b16b4278-934a-48e0-9503-bce8310d9168") : failed to sync secret cache: timed out waiting for the condition Nov 6 00:30:11.778350 systemd[1]: run-netns-cni\x2dd9f8e5f5\x2de544\x2d7575\x2dd1ef\x2de4296b19bce6.mount: Deactivated successfully. Nov 6 00:30:11.778584 systemd[1]: run-netns-cni\x2d425e53db\x2d9b91\x2d3dae\x2d8413\x2d7d9a97f0e1c2.mount: Deactivated successfully. 
Nov 6 00:30:11.877657 kubelet[2741]: I1106 00:30:11.877402 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:30:11.878353 kubelet[2741]: E1106 00:30:11.877885 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:12.372179 containerd[1556]: time="2025-11-06T00:30:12.372100734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd49bfb84-zzvkv,Uid:b16b4278-934a-48e0-9503-bce8310d9168,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:12.445200 containerd[1556]: time="2025-11-06T00:30:12.445054729Z" level=error msg="Failed to destroy network for sandbox \"01b78efccc1394244e53e7412d4c77b43f00b853eab9dc0978b89030fdd5535e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:12.447555 containerd[1556]: time="2025-11-06T00:30:12.447518922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd49bfb84-zzvkv,Uid:b16b4278-934a-48e0-9503-bce8310d9168,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b78efccc1394244e53e7412d4c77b43f00b853eab9dc0978b89030fdd5535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:30:12.449075 kubelet[2741]: E1106 00:30:12.449023 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b78efccc1394244e53e7412d4c77b43f00b853eab9dc0978b89030fdd5535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 6 00:30:12.449139 kubelet[2741]: E1106 00:30:12.449090 2741 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b78efccc1394244e53e7412d4c77b43f00b853eab9dc0978b89030fdd5535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fd49bfb84-zzvkv" Nov 6 00:30:12.449139 kubelet[2741]: E1106 00:30:12.449114 2741 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b78efccc1394244e53e7412d4c77b43f00b853eab9dc0978b89030fdd5535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fd49bfb84-zzvkv" Nov 6 00:30:12.449228 kubelet[2741]: E1106 00:30:12.449200 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fd49bfb84-zzvkv_calico-system(b16b4278-934a-48e0-9503-bce8310d9168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fd49bfb84-zzvkv_calico-system(b16b4278-934a-48e0-9503-bce8310d9168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01b78efccc1394244e53e7412d4c77b43f00b853eab9dc0978b89030fdd5535e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fd49bfb84-zzvkv" podUID="b16b4278-934a-48e0-9503-bce8310d9168" Nov 6 00:30:12.450200 systemd[1]: run-netns-cni\x2d0ca2fa57\x2d38bd\x2d8a62\x2d5fba\x2d35dbce22fc47.mount: Deactivated successfully. 
Nov 6 00:30:12.616743 kubelet[2741]: E1106 00:30:12.616581 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:15.393528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359352811.mount: Deactivated successfully. Nov 6 00:30:15.422119 containerd[1556]: time="2025-11-06T00:30:15.422050355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:15.423179 containerd[1556]: time="2025-11-06T00:30:15.422914314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:30:15.425282 containerd[1556]: time="2025-11-06T00:30:15.425252613Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:15.428329 containerd[1556]: time="2025-11-06T00:30:15.428303087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:15.428580 containerd[1556]: time="2025-11-06T00:30:15.428557323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.813553438s" Nov 6 00:30:15.428651 containerd[1556]: time="2025-11-06T00:30:15.428633124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:30:15.450392 containerd[1556]: time="2025-11-06T00:30:15.450353690Z" level=info msg="CreateContainer within sandbox \"07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:30:15.459371 containerd[1556]: time="2025-11-06T00:30:15.459337560Z" level=info msg="Container b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:30:15.468065 containerd[1556]: time="2025-11-06T00:30:15.468035753Z" level=info msg="CreateContainer within sandbox \"07235b4347f09c17a6455e78ffcfaf311b17b4535fc687befdbed5a8269c8b3c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\"" Nov 6 00:30:15.468872 containerd[1556]: time="2025-11-06T00:30:15.468812139Z" level=info msg="StartContainer for \"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\"" Nov 6 00:30:15.473058 containerd[1556]: time="2025-11-06T00:30:15.472979457Z" level=info msg="connecting to shim b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3" address="unix:///run/containerd/s/cfdbaffe03447f47cf9850707de7145238fa916027ef8a21fc41b33deee9be30" protocol=ttrpc version=3 Nov 6 00:30:15.533288 systemd[1]: Started cri-containerd-b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3.scope - libcontainer container b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3. 
Nov 6 00:30:15.604434 containerd[1556]: time="2025-11-06T00:30:15.604386710Z" level=info msg="StartContainer for \"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" returns successfully" Nov 6 00:30:15.647430 kubelet[2741]: E1106 00:30:15.647310 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:15.669562 kubelet[2741]: I1106 00:30:15.669519 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hdlwj" podStartSLOduration=0.937301694 podStartE2EDuration="11.66950863s" podCreationTimestamp="2025-11-06 00:30:04 +0000 UTC" firstStartedPulling="2025-11-06 00:30:04.697734925 +0000 UTC m=+22.388341939" lastFinishedPulling="2025-11-06 00:30:15.429941871 +0000 UTC m=+33.120548875" observedRunningTime="2025-11-06 00:30:15.667823175 +0000 UTC m=+33.358430179" watchObservedRunningTime="2025-11-06 00:30:15.66950863 +0000 UTC m=+33.360115634" Nov 6 00:30:15.714880 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:30:15.714987 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 6 00:30:15.891338 kubelet[2741]: I1106 00:30:15.890335 2741 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b16b4278-934a-48e0-9503-bce8310d9168-whisker-ca-bundle\") pod \"b16b4278-934a-48e0-9503-bce8310d9168\" (UID: \"b16b4278-934a-48e0-9503-bce8310d9168\") " Nov 6 00:30:15.891338 kubelet[2741]: I1106 00:30:15.890372 2741 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b16b4278-934a-48e0-9503-bce8310d9168-whisker-backend-key-pair\") pod \"b16b4278-934a-48e0-9503-bce8310d9168\" (UID: \"b16b4278-934a-48e0-9503-bce8310d9168\") " Nov 6 00:30:15.891338 kubelet[2741]: I1106 00:30:15.890390 2741 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkm8q\" (UniqueName: \"kubernetes.io/projected/b16b4278-934a-48e0-9503-bce8310d9168-kube-api-access-pkm8q\") pod \"b16b4278-934a-48e0-9503-bce8310d9168\" (UID: \"b16b4278-934a-48e0-9503-bce8310d9168\") " Nov 6 00:30:15.891338 kubelet[2741]: I1106 00:30:15.890764 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16b4278-934a-48e0-9503-bce8310d9168-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b16b4278-934a-48e0-9503-bce8310d9168" (UID: "b16b4278-934a-48e0-9503-bce8310d9168"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:30:15.899338 kubelet[2741]: I1106 00:30:15.899277 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16b4278-934a-48e0-9503-bce8310d9168-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b16b4278-934a-48e0-9503-bce8310d9168" (UID: "b16b4278-934a-48e0-9503-bce8310d9168"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:30:15.899430 kubelet[2741]: I1106 00:30:15.899279 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16b4278-934a-48e0-9503-bce8310d9168-kube-api-access-pkm8q" (OuterVolumeSpecName: "kube-api-access-pkm8q") pod "b16b4278-934a-48e0-9503-bce8310d9168" (UID: "b16b4278-934a-48e0-9503-bce8310d9168"). InnerVolumeSpecName "kube-api-access-pkm8q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:30:15.990898 kubelet[2741]: I1106 00:30:15.990762 2741 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b16b4278-934a-48e0-9503-bce8310d9168-whisker-backend-key-pair\") on node \"172-232-1-216\" DevicePath \"\"" Nov 6 00:30:15.990898 kubelet[2741]: I1106 00:30:15.990819 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pkm8q\" (UniqueName: \"kubernetes.io/projected/b16b4278-934a-48e0-9503-bce8310d9168-kube-api-access-pkm8q\") on node \"172-232-1-216\" DevicePath \"\"" Nov 6 00:30:15.990898 kubelet[2741]: I1106 00:30:15.990832 2741 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b16b4278-934a-48e0-9503-bce8310d9168-whisker-ca-bundle\") on node \"172-232-1-216\" DevicePath \"\"" Nov 6 00:30:16.332967 containerd[1556]: time="2025-11-06T00:30:16.332907095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"473995de8294436c7cbe6591502dff1d409085fc4f44a05582c4fd1f99e67e26\" pid:3877 exit_status:1 exited_at:{seconds:1762389016 nanos:332083879}" Nov 6 00:30:16.393606 systemd[1]: var-lib-kubelet-pods-b16b4278\x2d934a\x2d48e0\x2d9503\x2dbce8310d9168-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 6 00:30:16.393738 systemd[1]: var-lib-kubelet-pods-b16b4278\x2d934a\x2d48e0\x2d9503\x2dbce8310d9168-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpkm8q.mount: Deactivated successfully. Nov 6 00:30:16.428201 containerd[1556]: time="2025-11-06T00:30:16.428128736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"d63173ae0b56c5a5ef1bfbfacea27cd09f3bbc3bf56f4719a29c45474b5b56fa\" pid:3903 exit_status:1 exited_at:{seconds:1762389016 nanos:427751559}" Nov 6 00:30:16.470661 systemd[1]: Removed slice kubepods-besteffort-podb16b4278_934a_48e0_9503_bce8310d9168.slice - libcontainer container kubepods-besteffort-podb16b4278_934a_48e0_9503_bce8310d9168.slice. Nov 6 00:30:16.649572 kubelet[2741]: E1106 00:30:16.649462 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:16.726190 systemd[1]: Created slice kubepods-besteffort-pod5fa7c5d6_8a14_4c07_a175_2cee4871f07f.slice - libcontainer container kubepods-besteffort-pod5fa7c5d6_8a14_4c07_a175_2cee4871f07f.slice. 
Nov 6 00:30:16.794317 containerd[1556]: time="2025-11-06T00:30:16.794260867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"5e853893f3b878c45d475dd6bd726b1f59aaecf2b9b9871a2a8fd906b4f6f8f2\" pid:3929 exit_status:1 exited_at:{seconds:1762389016 nanos:793928001}" Nov 6 00:30:16.800752 kubelet[2741]: I1106 00:30:16.800445 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fa7c5d6-8a14-4c07-a175-2cee4871f07f-whisker-ca-bundle\") pod \"whisker-669d4994db-c2xm2\" (UID: \"5fa7c5d6-8a14-4c07-a175-2cee4871f07f\") " pod="calico-system/whisker-669d4994db-c2xm2" Nov 6 00:30:16.800752 kubelet[2741]: I1106 00:30:16.800488 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fa7c5d6-8a14-4c07-a175-2cee4871f07f-whisker-backend-key-pair\") pod \"whisker-669d4994db-c2xm2\" (UID: \"5fa7c5d6-8a14-4c07-a175-2cee4871f07f\") " pod="calico-system/whisker-669d4994db-c2xm2" Nov 6 00:30:16.800752 kubelet[2741]: I1106 00:30:16.800697 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgqtt\" (UniqueName: \"kubernetes.io/projected/5fa7c5d6-8a14-4c07-a175-2cee4871f07f-kube-api-access-pgqtt\") pod \"whisker-669d4994db-c2xm2\" (UID: \"5fa7c5d6-8a14-4c07-a175-2cee4871f07f\") " pod="calico-system/whisker-669d4994db-c2xm2" Nov 6 00:30:17.037075 containerd[1556]: time="2025-11-06T00:30:17.036932936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669d4994db-c2xm2,Uid:5fa7c5d6-8a14-4c07-a175-2cee4871f07f,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:17.188563 systemd-networkd[1449]: cali96ecd3f2b63: Link UP Nov 6 00:30:17.190296 systemd-networkd[1449]: cali96ecd3f2b63: Gained carrier Nov 6 
00:30:17.208965 containerd[1556]: 2025-11-06 00:30:17.063 [INFO][3945] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:30:17.208965 containerd[1556]: 2025-11-06 00:30:17.101 [INFO][3945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0 whisker-669d4994db- calico-system 5fa7c5d6-8a14-4c07-a175-2cee4871f07f 951 0 2025-11-06 00:30:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:669d4994db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-232-1-216 whisker-669d4994db-c2xm2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali96ecd3f2b63 [] [] }} ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-" Nov 6 00:30:17.208965 containerd[1556]: 2025-11-06 00:30:17.102 [INFO][3945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" Nov 6 00:30:17.208965 containerd[1556]: 2025-11-06 00:30:17.132 [INFO][3956] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" HandleID="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Workload="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.133 [INFO][3956] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" 
HandleID="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Workload="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-1-216", "pod":"whisker-669d4994db-c2xm2", "timestamp":"2025-11-06 00:30:17.132917886 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.133 [INFO][3956] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.133 [INFO][3956] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.134 [INFO][3956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.141 [INFO][3956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" host="172-232-1-216" Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.147 [INFO][3956] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.151 [INFO][3956] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.153 [INFO][3956] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:17.209447 containerd[1556]: 2025-11-06 00:30:17.156 [INFO][3956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:17.209447 
containerd[1556]: 2025-11-06 00:30:17.156 [INFO][3956] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" host="172-232-1-216" Nov 6 00:30:17.209671 containerd[1556]: 2025-11-06 00:30:17.157 [INFO][3956] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8 Nov 6 00:30:17.209671 containerd[1556]: 2025-11-06 00:30:17.162 [INFO][3956] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" host="172-232-1-216" Nov 6 00:30:17.209671 containerd[1556]: 2025-11-06 00:30:17.166 [INFO][3956] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.129/26] block=192.168.98.128/26 handle="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" host="172-232-1-216" Nov 6 00:30:17.209671 containerd[1556]: 2025-11-06 00:30:17.167 [INFO][3956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.129/26] handle="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" host="172-232-1-216" Nov 6 00:30:17.209671 containerd[1556]: 2025-11-06 00:30:17.167 [INFO][3956] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:30:17.209671 containerd[1556]: 2025-11-06 00:30:17.167 [INFO][3956] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.129/26] IPv6=[] ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" HandleID="k8s-pod-network.7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Workload="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" Nov 6 00:30:17.209784 containerd[1556]: 2025-11-06 00:30:17.173 [INFO][3945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0", GenerateName:"whisker-669d4994db-", Namespace:"calico-system", SelfLink:"", UID:"5fa7c5d6-8a14-4c07-a175-2cee4871f07f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669d4994db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"whisker-669d4994db-c2xm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali96ecd3f2b63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:17.209784 containerd[1556]: 2025-11-06 00:30:17.173 [INFO][3945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.129/32] ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" Nov 6 00:30:17.209863 containerd[1556]: 2025-11-06 00:30:17.173 [INFO][3945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96ecd3f2b63 ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" Nov 6 00:30:17.209863 containerd[1556]: 2025-11-06 00:30:17.192 [INFO][3945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" Nov 6 00:30:17.209907 containerd[1556]: 2025-11-06 00:30:17.192 [INFO][3945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0", GenerateName:"whisker-669d4994db-", Namespace:"calico-system", SelfLink:"", UID:"5fa7c5d6-8a14-4c07-a175-2cee4871f07f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 
6, 0, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"669d4994db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8", Pod:"whisker-669d4994db-c2xm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96ecd3f2b63", MAC:"b2:9a:2f:5c:aa:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:17.209967 containerd[1556]: 2025-11-06 00:30:17.204 [INFO][3945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" Namespace="calico-system" Pod="whisker-669d4994db-c2xm2" WorkloadEndpoint="172--232--1--216-k8s-whisker--669d4994db--c2xm2-eth0" Nov 6 00:30:17.270394 containerd[1556]: time="2025-11-06T00:30:17.270300044Z" level=info msg="connecting to shim 7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8" address="unix:///run/containerd/s/13d58cb0e5b1ca0adabb614a904e0b55767b69b653dc58be93ff5656f600b2ea" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:17.337377 systemd[1]: Started cri-containerd-7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8.scope - libcontainer container 7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8. 
Nov 6 00:30:17.523251 containerd[1556]: time="2025-11-06T00:30:17.523108062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669d4994db-c2xm2,Uid:5fa7c5d6-8a14-4c07-a175-2cee4871f07f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a08e72c2daeab1d45dcd9f76191bd78a64fb24b19cc70e39310885a1c4e5ab8\"" Nov 6 00:30:17.527620 containerd[1556]: time="2025-11-06T00:30:17.527468263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:30:17.658589 kubelet[2741]: E1106 00:30:17.657524 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:17.672295 containerd[1556]: time="2025-11-06T00:30:17.672186098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:17.674034 containerd[1556]: time="2025-11-06T00:30:17.673442500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:30:17.674034 containerd[1556]: time="2025-11-06T00:30:17.673505121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:30:17.674781 kubelet[2741]: E1106 00:30:17.674245 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:30:17.674781 kubelet[2741]: E1106 00:30:17.674280 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:30:17.674781 kubelet[2741]: E1106 00:30:17.674347 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:17.682988 containerd[1556]: time="2025-11-06T00:30:17.682968117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:30:17.790969 containerd[1556]: time="2025-11-06T00:30:17.790933400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"45900ea1ef7c373412b25cf7a8472b7c63d4223ceebb381c5f8589eeba826403\" pid:4153 exit_status:1 exited_at:{seconds:1762389017 nanos:790441971}" Nov 6 00:30:17.825351 containerd[1556]: time="2025-11-06T00:30:17.825213245Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:17.826469 containerd[1556]: time="2025-11-06T00:30:17.826424438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:30:17.826798 containerd[1556]: time="2025-11-06T00:30:17.826537120Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:30:17.826974 kubelet[2741]: E1106 00:30:17.826913 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:30:17.827127 kubelet[2741]: E1106 00:30:17.826948 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:30:17.828020 kubelet[2741]: E1106 00:30:17.827268 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:17.828020 kubelet[2741]: E1106 00:30:17.827982 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f" Nov 6 00:30:17.992409 systemd-networkd[1449]: vxlan.calico: Link UP Nov 6 00:30:17.992417 systemd-networkd[1449]: vxlan.calico: Gained carrier Nov 6 00:30:18.468365 kubelet[2741]: I1106 00:30:18.467449 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16b4278-934a-48e0-9503-bce8310d9168" path="/var/lib/kubelet/pods/b16b4278-934a-48e0-9503-bce8310d9168/volumes" Nov 6 00:30:18.584445 systemd-networkd[1449]: cali96ecd3f2b63: Gained IPv6LL Nov 6 00:30:18.660699 kubelet[2741]: E1106 00:30:18.659140 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f" Nov 6 00:30:19.032369 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Nov 6 00:30:21.465827 containerd[1556]: 
time="2025-11-06T00:30:21.465762923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-ggwql,Uid:0b8ecf15-79f1-43df-a5c1-419b36087e14,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:21.571854 systemd-networkd[1449]: cali577e7b8a7a2: Link UP Nov 6 00:30:21.573429 systemd-networkd[1449]: cali577e7b8a7a2: Gained carrier Nov 6 00:30:21.590140 containerd[1556]: 2025-11-06 00:30:21.504 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0 goldmane-7c778bb748- calico-system 0b8ecf15-79f1-43df-a5c1-419b36087e14 871 0 2025-11-06 00:30:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-1-216 goldmane-7c778bb748-ggwql eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali577e7b8a7a2 [] [] }} ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-" Nov 6 00:30:21.590140 containerd[1556]: 2025-11-06 00:30:21.504 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" Nov 6 00:30:21.590140 containerd[1556]: 2025-11-06 00:30:21.529 [INFO][4251] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" HandleID="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Workload="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" Nov 6 
00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.529 [INFO][4251] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" HandleID="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Workload="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-1-216", "pod":"goldmane-7c778bb748-ggwql", "timestamp":"2025-11-06 00:30:21.529564884 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.529 [INFO][4251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.529 [INFO][4251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.529 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.535 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" host="172-232-1-216" Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.539 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.548 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.550 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.552 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:21.590365 containerd[1556]: 2025-11-06 00:30:21.552 [INFO][4251] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" host="172-232-1-216" Nov 6 00:30:21.590582 containerd[1556]: 2025-11-06 00:30:21.553 [INFO][4251] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9 Nov 6 00:30:21.590582 containerd[1556]: 2025-11-06 00:30:21.557 [INFO][4251] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" host="172-232-1-216" Nov 6 00:30:21.590582 containerd[1556]: 2025-11-06 00:30:21.563 [INFO][4251] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.130/26] block=192.168.98.128/26 
handle="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" host="172-232-1-216" Nov 6 00:30:21.590582 containerd[1556]: 2025-11-06 00:30:21.563 [INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.130/26] handle="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" host="172-232-1-216" Nov 6 00:30:21.590582 containerd[1556]: 2025-11-06 00:30:21.563 [INFO][4251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:30:21.590582 containerd[1556]: 2025-11-06 00:30:21.563 [INFO][4251] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.130/26] IPv6=[] ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" HandleID="k8s-pod-network.2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Workload="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" Nov 6 00:30:21.590769 containerd[1556]: 2025-11-06 00:30:21.567 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0b8ecf15-79f1-43df-a5c1-419b36087e14", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"goldmane-7c778bb748-ggwql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali577e7b8a7a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:21.590769 containerd[1556]: 2025-11-06 00:30:21.567 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.130/32] ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" Nov 6 00:30:21.590861 containerd[1556]: 2025-11-06 00:30:21.567 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali577e7b8a7a2 ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" Nov 6 00:30:21.590861 containerd[1556]: 2025-11-06 00:30:21.574 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" Nov 6 00:30:21.590909 containerd[1556]: 2025-11-06 00:30:21.574 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" 
Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0b8ecf15-79f1-43df-a5c1-419b36087e14", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9", Pod:"goldmane-7c778bb748-ggwql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali577e7b8a7a2", MAC:"92:67:91:23:0c:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:21.590995 containerd[1556]: 2025-11-06 00:30:21.582 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" Namespace="calico-system" Pod="goldmane-7c778bb748-ggwql" WorkloadEndpoint="172--232--1--216-k8s-goldmane--7c778bb748--ggwql-eth0" Nov 6 00:30:21.617343 containerd[1556]: 
time="2025-11-06T00:30:21.617295061Z" level=info msg="connecting to shim 2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9" address="unix:///run/containerd/s/b1a1a60488b1602947765ea6c1c8a2d8cda8c11ad9373a2ae0ba99f0ef3ed099" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:21.651288 systemd[1]: Started cri-containerd-2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9.scope - libcontainer container 2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9. Nov 6 00:30:21.700948 containerd[1556]: time="2025-11-06T00:30:21.700893427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-ggwql,Uid:0b8ecf15-79f1-43df-a5c1-419b36087e14,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a05344edaca8441101dd68f38d9e94302072d50045ee46a4f8b5d7663ff5be9\"" Nov 6 00:30:21.704081 containerd[1556]: time="2025-11-06T00:30:21.703323813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:30:21.840485 containerd[1556]: time="2025-11-06T00:30:21.840411342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:21.841947 containerd[1556]: time="2025-11-06T00:30:21.841870543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:30:21.842008 containerd[1556]: time="2025-11-06T00:30:21.841871653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:21.842298 kubelet[2741]: E1106 00:30:21.842243 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:30:21.843038 kubelet[2741]: E1106 00:30:21.842624 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:30:21.843038 kubelet[2741]: E1106 00:30:21.842711 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-ggwql_calico-system(0b8ecf15-79f1-43df-a5c1-419b36087e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:21.843038 kubelet[2741]: E1106 00:30:21.842746 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:30:22.464535 kubelet[2741]: E1106 00:30:22.464459 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:22.469328 containerd[1556]: time="2025-11-06T00:30:22.469019922Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-5gb5m,Uid:f8977de6-09a2-44c2-9dd7-aa57e7cd985b,Namespace:kube-system,Attempt:0,}" Nov 6 00:30:22.469804 kubelet[2741]: E1106 00:30:22.469228 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:22.470535 containerd[1556]: time="2025-11-06T00:30:22.470514062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sm7cx,Uid:d35ab5e8-6aeb-4e34-b248-eb58835d1058,Namespace:kube-system,Attempt:0,}" Nov 6 00:30:22.632252 systemd-networkd[1449]: calif638341444b: Link UP Nov 6 00:30:22.634648 systemd-networkd[1449]: calif638341444b: Gained carrier Nov 6 00:30:22.652558 containerd[1556]: 2025-11-06 00:30:22.530 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0 coredns-66bc5c9577- kube-system d35ab5e8-6aeb-4e34-b248-eb58835d1058 873 0 2025-11-06 00:29:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-1-216 coredns-66bc5c9577-sm7cx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif638341444b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" Pod="coredns-66bc5c9577-sm7cx" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-" Nov 6 00:30:22.652558 containerd[1556]: 2025-11-06 00:30:22.531 [INFO][4316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" 
Pod="coredns-66bc5c9577-sm7cx" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" Nov 6 00:30:22.652558 containerd[1556]: 2025-11-06 00:30:22.573 [INFO][4340] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" HandleID="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Workload="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.573 [INFO][4340] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" HandleID="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Workload="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-1-216", "pod":"coredns-66bc5c9577-sm7cx", "timestamp":"2025-11-06 00:30:22.573488249 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.573 [INFO][4340] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.573 [INFO][4340] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.573 [INFO][4340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.582 [INFO][4340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" host="172-232-1-216" Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.588 [INFO][4340] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.593 [INFO][4340] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.595 [INFO][4340] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.597 [INFO][4340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:22.652736 containerd[1556]: 2025-11-06 00:30:22.597 [INFO][4340] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" host="172-232-1-216" Nov 6 00:30:22.652968 containerd[1556]: 2025-11-06 00:30:22.599 [INFO][4340] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7 Nov 6 00:30:22.652968 containerd[1556]: 2025-11-06 00:30:22.606 [INFO][4340] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" host="172-232-1-216" Nov 6 00:30:22.652968 containerd[1556]: 2025-11-06 00:30:22.614 [INFO][4340] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.131/26] block=192.168.98.128/26 
handle="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" host="172-232-1-216" Nov 6 00:30:22.652968 containerd[1556]: 2025-11-06 00:30:22.614 [INFO][4340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.131/26] handle="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" host="172-232-1-216" Nov 6 00:30:22.652968 containerd[1556]: 2025-11-06 00:30:22.614 [INFO][4340] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:30:22.652968 containerd[1556]: 2025-11-06 00:30:22.614 [INFO][4340] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.131/26] IPv6=[] ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" HandleID="k8s-pod-network.2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Workload="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" Nov 6 00:30:22.653082 containerd[1556]: 2025-11-06 00:30:22.619 [INFO][4316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" Pod="coredns-66bc5c9577-sm7cx" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d35ab5e8-6aeb-4e34-b248-eb58835d1058", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"coredns-66bc5c9577-sm7cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif638341444b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:22.653082 containerd[1556]: 2025-11-06 00:30:22.619 [INFO][4316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.131/32] ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" Pod="coredns-66bc5c9577-sm7cx" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" Nov 6 00:30:22.653082 containerd[1556]: 2025-11-06 00:30:22.619 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif638341444b ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" Pod="coredns-66bc5c9577-sm7cx" 
WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" Nov 6 00:30:22.653082 containerd[1556]: 2025-11-06 00:30:22.636 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" Pod="coredns-66bc5c9577-sm7cx" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" Nov 6 00:30:22.653082 containerd[1556]: 2025-11-06 00:30:22.638 [INFO][4316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" Pod="coredns-66bc5c9577-sm7cx" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d35ab5e8-6aeb-4e34-b248-eb58835d1058", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7", Pod:"coredns-66bc5c9577-sm7cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif638341444b", MAC:"c2:e3:13:13:e2:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:22.653082 containerd[1556]: 2025-11-06 00:30:22.648 [INFO][4316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" Namespace="kube-system" Pod="coredns-66bc5c9577-sm7cx" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--sm7cx-eth0" Nov 6 00:30:22.678182 kubelet[2741]: E1106 00:30:22.674115 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:30:22.752345 containerd[1556]: time="2025-11-06T00:30:22.752027852Z" 
level=info msg="connecting to shim 2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7" address="unix:///run/containerd/s/64476675bd0ea1ee5b9864eb22d8ad6cff5d173e897dfcf70bad94a9067e8753" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:22.806766 systemd-networkd[1449]: cali12138f76c31: Link UP Nov 6 00:30:22.809448 systemd-networkd[1449]: cali12138f76c31: Gained carrier Nov 6 00:30:22.828492 systemd[1]: Started cri-containerd-2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7.scope - libcontainer container 2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7. Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.537 [INFO][4315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0 coredns-66bc5c9577- kube-system f8977de6-09a2-44c2-9dd7-aa57e7cd985b 874 0 2025-11-06 00:29:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-1-216 coredns-66bc5c9577-5gb5m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12138f76c31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.538 [INFO][4315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.586 [INFO][4345] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" HandleID="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Workload="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.588 [INFO][4345] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" HandleID="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Workload="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-1-216", "pod":"coredns-66bc5c9577-5gb5m", "timestamp":"2025-11-06 00:30:22.586641368 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.588 [INFO][4345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.615 [INFO][4345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.615 [INFO][4345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.687 [INFO][4345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.712 [INFO][4345] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.742 [INFO][4345] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.746 [INFO][4345] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.749 [INFO][4345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.749 [INFO][4345] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.750 [INFO][4345] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.758 [INFO][4345] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.771 [INFO][4345] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.132/26] block=192.168.98.128/26 
handle="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.771 [INFO][4345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.132/26] handle="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" host="172-232-1-216" Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.771 [INFO][4345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:30:22.838519 containerd[1556]: 2025-11-06 00:30:22.771 [INFO][4345] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.132/26] IPv6=[] ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" HandleID="k8s-pod-network.9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Workload="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" Nov 6 00:30:22.839662 containerd[1556]: 2025-11-06 00:30:22.785 [INFO][4315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f8977de6-09a2-44c2-9dd7-aa57e7cd985b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"coredns-66bc5c9577-5gb5m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12138f76c31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:22.839662 containerd[1556]: 2025-11-06 00:30:22.785 [INFO][4315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.132/32] ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" Nov 6 00:30:22.839662 containerd[1556]: 2025-11-06 00:30:22.785 [INFO][4315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12138f76c31 ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" 
WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" Nov 6 00:30:22.839662 containerd[1556]: 2025-11-06 00:30:22.808 [INFO][4315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" Nov 6 00:30:22.839662 containerd[1556]: 2025-11-06 00:30:22.808 [INFO][4315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f8977de6-09a2-44c2-9dd7-aa57e7cd985b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d", Pod:"coredns-66bc5c9577-5gb5m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12138f76c31", MAC:"36:4c:d6:15:5b:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:22.839662 containerd[1556]: 2025-11-06 00:30:22.822 [INFO][4315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" Namespace="kube-system" Pod="coredns-66bc5c9577-5gb5m" WorkloadEndpoint="172--232--1--216-k8s-coredns--66bc5c9577--5gb5m-eth0" Nov 6 00:30:22.875268 systemd-networkd[1449]: cali577e7b8a7a2: Gained IPv6LL Nov 6 00:30:22.890865 containerd[1556]: time="2025-11-06T00:30:22.890807334Z" level=info msg="connecting to shim 9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d" address="unix:///run/containerd/s/41381f52213204f4afd203f51334695fc1c27802bf2dcdf70ec467e3315193b6" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:22.955293 systemd[1]: Started cri-containerd-9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d.scope - libcontainer container 9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d. 
Nov 6 00:30:22.965654 containerd[1556]: time="2025-11-06T00:30:22.965587749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sm7cx,Uid:d35ab5e8-6aeb-4e34-b248-eb58835d1058,Namespace:kube-system,Attempt:0,} returns sandbox id \"2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7\"" Nov 6 00:30:22.966869 kubelet[2741]: E1106 00:30:22.966745 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:22.973469 containerd[1556]: time="2025-11-06T00:30:22.973410775Z" level=info msg="CreateContainer within sandbox \"2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:30:22.984278 containerd[1556]: time="2025-11-06T00:30:22.984233012Z" level=info msg="Container 160dfe1602f7cbf73e569464c31d72b91ead08a434c5a8ca4373c080a4500fd1: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:30:22.988663 containerd[1556]: time="2025-11-06T00:30:22.988614721Z" level=info msg="CreateContainer within sandbox \"2954f362e8eded1d24e47923a8ea99d0c0ad8b0766b85310ca7b357f24bc2bb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"160dfe1602f7cbf73e569464c31d72b91ead08a434c5a8ca4373c080a4500fd1\"" Nov 6 00:30:22.989664 containerd[1556]: time="2025-11-06T00:30:22.989626245Z" level=info msg="StartContainer for \"160dfe1602f7cbf73e569464c31d72b91ead08a434c5a8ca4373c080a4500fd1\"" Nov 6 00:30:22.991484 containerd[1556]: time="2025-11-06T00:30:22.991437339Z" level=info msg="connecting to shim 160dfe1602f7cbf73e569464c31d72b91ead08a434c5a8ca4373c080a4500fd1" address="unix:///run/containerd/s/64476675bd0ea1ee5b9864eb22d8ad6cff5d173e897dfcf70bad94a9067e8753" protocol=ttrpc version=3 Nov 6 00:30:23.021309 systemd[1]: Started cri-containerd-160dfe1602f7cbf73e569464c31d72b91ead08a434c5a8ca4373c080a4500fd1.scope - 
libcontainer container 160dfe1602f7cbf73e569464c31d72b91ead08a434c5a8ca4373c080a4500fd1. Nov 6 00:30:23.057819 containerd[1556]: time="2025-11-06T00:30:23.057684322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5gb5m,Uid:f8977de6-09a2-44c2-9dd7-aa57e7cd985b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d\"" Nov 6 00:30:23.060614 kubelet[2741]: E1106 00:30:23.060592 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:23.069898 containerd[1556]: time="2025-11-06T00:30:23.067878312Z" level=info msg="CreateContainer within sandbox \"9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:30:23.080506 containerd[1556]: time="2025-11-06T00:30:23.079854504Z" level=info msg="Container 249f27e9d37bf1ec6307bb4d13e345530641fd2663a88b397ea5679b4bf37d6f: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:30:23.087538 containerd[1556]: time="2025-11-06T00:30:23.087494872Z" level=info msg="CreateContainer within sandbox \"9054ace36f7c9e79391814c0e7a7686d28839cbcd72d375206b739fd6304c88d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"249f27e9d37bf1ec6307bb4d13e345530641fd2663a88b397ea5679b4bf37d6f\"" Nov 6 00:30:23.089696 containerd[1556]: time="2025-11-06T00:30:23.089366006Z" level=info msg="StartContainer for \"249f27e9d37bf1ec6307bb4d13e345530641fd2663a88b397ea5679b4bf37d6f\"" Nov 6 00:30:23.092226 containerd[1556]: time="2025-11-06T00:30:23.092202753Z" level=info msg="connecting to shim 249f27e9d37bf1ec6307bb4d13e345530641fd2663a88b397ea5679b4bf37d6f" address="unix:///run/containerd/s/41381f52213204f4afd203f51334695fc1c27802bf2dcdf70ec467e3315193b6" protocol=ttrpc version=3 Nov 6 00:30:23.105303 containerd[1556]: 
time="2025-11-06T00:30:23.105278499Z" level=info msg="StartContainer for \"160dfe1602f7cbf73e569464c31d72b91ead08a434c5a8ca4373c080a4500fd1\" returns successfully" Nov 6 00:30:23.132426 systemd[1]: Started cri-containerd-249f27e9d37bf1ec6307bb4d13e345530641fd2663a88b397ea5679b4bf37d6f.scope - libcontainer container 249f27e9d37bf1ec6307bb4d13e345530641fd2663a88b397ea5679b4bf37d6f. Nov 6 00:30:23.190349 containerd[1556]: time="2025-11-06T00:30:23.190285653Z" level=info msg="StartContainer for \"249f27e9d37bf1ec6307bb4d13e345530641fd2663a88b397ea5679b4bf37d6f\" returns successfully" Nov 6 00:30:23.465082 containerd[1556]: time="2025-11-06T00:30:23.464737573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5f4d7b75-nsr7v,Uid:456ae3b3-bf69-483d-be8f-52ebc676c862,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:23.619215 systemd-networkd[1449]: calib7b80a49858: Link UP Nov 6 00:30:23.620678 systemd-networkd[1449]: calib7b80a49858: Gained carrier Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.515 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0 calico-kube-controllers-f5f4d7b75- calico-system 456ae3b3-bf69-483d-be8f-52ebc676c862 876 0 2025-11-06 00:30:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f5f4d7b75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-1-216 calico-kube-controllers-f5f4d7b75-nsr7v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib7b80a49858 [] [] }} ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" 
WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.516 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.553 [INFO][4549] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" HandleID="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Workload="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.553 [INFO][4549] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" HandleID="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Workload="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-1-216", "pod":"calico-kube-controllers-f5f4d7b75-nsr7v", "timestamp":"2025-11-06 00:30:23.553474344 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.553 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.553 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.553 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.563 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.572 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.577 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.580 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.583 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.583 [INFO][4549] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.584 [INFO][4549] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635 Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.595 [INFO][4549] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.607 [INFO][4549] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.133/26] block=192.168.98.128/26 
handle="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.607 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.133/26] handle="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" host="172-232-1-216" Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.607 [INFO][4549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:30:23.641466 containerd[1556]: 2025-11-06 00:30:23.607 [INFO][4549] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.133/26] IPv6=[] ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" HandleID="k8s-pod-network.9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Workload="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" Nov 6 00:30:23.642603 containerd[1556]: 2025-11-06 00:30:23.611 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0", GenerateName:"calico-kube-controllers-f5f4d7b75-", Namespace:"calico-system", SelfLink:"", UID:"456ae3b3-bf69-483d-be8f-52ebc676c862", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5f4d7b75", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"calico-kube-controllers-f5f4d7b75-nsr7v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7b80a49858", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:23.642603 containerd[1556]: 2025-11-06 00:30:23.611 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.133/32] ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" Nov 6 00:30:23.642603 containerd[1556]: 2025-11-06 00:30:23.611 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7b80a49858 ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" Nov 6 00:30:23.642603 containerd[1556]: 2025-11-06 00:30:23.622 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" 
WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" Nov 6 00:30:23.642603 containerd[1556]: 2025-11-06 00:30:23.623 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0", GenerateName:"calico-kube-controllers-f5f4d7b75-", Namespace:"calico-system", SelfLink:"", UID:"456ae3b3-bf69-483d-be8f-52ebc676c862", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5f4d7b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635", Pod:"calico-kube-controllers-f5f4d7b75-nsr7v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7b80a49858", MAC:"ce:46:2d:f0:f7:a7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:23.642603 containerd[1556]: 2025-11-06 00:30:23.636 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" Namespace="calico-system" Pod="calico-kube-controllers-f5f4d7b75-nsr7v" WorkloadEndpoint="172--232--1--216-k8s-calico--kube--controllers--f5f4d7b75--nsr7v-eth0" Nov 6 00:30:23.675823 containerd[1556]: time="2025-11-06T00:30:23.674701561Z" level=info msg="connecting to shim 9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635" address="unix:///run/containerd/s/557eaa9152ad2423e5620e21003e2ce702be60d3e2d4a0875db19e8b881eb9f7" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:23.682170 kubelet[2741]: E1106 00:30:23.681747 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:23.693759 kubelet[2741]: E1106 00:30:23.691054 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:23.694881 kubelet[2741]: E1106 00:30:23.694851 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:30:23.712533 kubelet[2741]: I1106 00:30:23.712474 2741 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5gb5m" podStartSLOduration=34.712463242 podStartE2EDuration="34.712463242s" podCreationTimestamp="2025-11-06 00:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:30:23.712292009 +0000 UTC m=+41.402899013" watchObservedRunningTime="2025-11-06 00:30:23.712463242 +0000 UTC m=+41.403070246" Nov 6 00:30:23.734920 systemd[1]: Started cri-containerd-9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635.scope - libcontainer container 9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635. Nov 6 00:30:23.740368 kubelet[2741]: I1106 00:30:23.739267 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sm7cx" podStartSLOduration=34.738997391 podStartE2EDuration="34.738997391s" podCreationTimestamp="2025-11-06 00:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:30:23.73742382 +0000 UTC m=+41.428030824" watchObservedRunningTime="2025-11-06 00:30:23.738997391 +0000 UTC m=+41.429604395" Nov 6 00:30:23.849193 containerd[1556]: time="2025-11-06T00:30:23.849130665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5f4d7b75-nsr7v,Uid:456ae3b3-bf69-483d-be8f-52ebc676c862,Namespace:calico-system,Attempt:0,} returns sandbox id \"9dab3dac619268c1975b15c1f0b62edbc90f2b950451ee0f136e05bbcbd9e635\"" Nov 6 00:30:23.851284 containerd[1556]: time="2025-11-06T00:30:23.851202061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:30:23.896401 systemd-networkd[1449]: cali12138f76c31: Gained IPv6LL Nov 6 00:30:24.005682 containerd[1556]: time="2025-11-06T00:30:24.005427215Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Nov 6 00:30:24.006912 containerd[1556]: time="2025-11-06T00:30:24.006772591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:30:24.006912 containerd[1556]: time="2025-11-06T00:30:24.006805781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:30:24.009652 kubelet[2741]: E1106 00:30:24.009615 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:30:24.010124 kubelet[2741]: E1106 00:30:24.009661 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:30:24.010124 kubelet[2741]: E1106 00:30:24.009776 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f5f4d7b75-nsr7v_calico-system(456ae3b3-bf69-483d-be8f-52ebc676c862): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:24.010537 kubelet[2741]: E1106 00:30:24.010508 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:30:24.344938 systemd-networkd[1449]: calif638341444b: Gained IPv6LL Nov 6 00:30:24.467619 containerd[1556]: time="2025-11-06T00:30:24.467302281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cfc7478-qx6xb,Uid:ad57f6e7-ba94-4071-8188-eaaec8d179ad,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:30:24.468311 containerd[1556]: time="2025-11-06T00:30:24.468262212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cfc7478-h4bdp,Uid:b84c01a2-1d41-475e-8a7a-47755f9e00e7,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:30:24.605305 systemd-networkd[1449]: cali68fd5452af4: Link UP Nov 6 00:30:24.607267 systemd-networkd[1449]: cali68fd5452af4: Gained carrier Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.524 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0 calico-apiserver-765cfc7478- calico-apiserver b84c01a2-1d41-475e-8a7a-47755f9e00e7 862 0 2025-11-06 00:29:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:765cfc7478 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] 
map[] [] [] []} {k8s 172-232-1-216 calico-apiserver-765cfc7478-h4bdp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68fd5452af4 [] [] }} ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.524 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.558 [INFO][4639] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" HandleID="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Workload="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.558 [INFO][4639] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" HandleID="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Workload="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-1-216", "pod":"calico-apiserver-765cfc7478-h4bdp", "timestamp":"2025-11-06 00:30:24.558540334 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.558 [INFO][4639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.559 [INFO][4639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.559 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.567 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.572 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.577 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.579 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.582 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.582 [INFO][4639] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.585 [INFO][4639] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.590 [INFO][4639] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 
handle="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.595 [INFO][4639] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.134/26] block=192.168.98.128/26 handle="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.595 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.134/26] handle="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" host="172-232-1-216" Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.595 [INFO][4639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:30:24.623832 containerd[1556]: 2025-11-06 00:30:24.595 [INFO][4639] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.134/26] IPv6=[] ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" HandleID="k8s-pod-network.48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Workload="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" Nov 6 00:30:24.624612 containerd[1556]: 2025-11-06 00:30:24.602 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0", GenerateName:"calico-apiserver-765cfc7478-", Namespace:"calico-apiserver", SelfLink:"", UID:"b84c01a2-1d41-475e-8a7a-47755f9e00e7", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 59, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cfc7478", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"calico-apiserver-765cfc7478-h4bdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68fd5452af4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:24.624612 containerd[1556]: 2025-11-06 00:30:24.602 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.134/32] ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" Nov 6 00:30:24.624612 containerd[1556]: 2025-11-06 00:30:24.602 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68fd5452af4 ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" Nov 6 00:30:24.624612 containerd[1556]: 2025-11-06 00:30:24.607 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" Nov 6 00:30:24.624612 containerd[1556]: 2025-11-06 00:30:24.608 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0", GenerateName:"calico-apiserver-765cfc7478-", Namespace:"calico-apiserver", SelfLink:"", UID:"b84c01a2-1d41-475e-8a7a-47755f9e00e7", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cfc7478", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e", Pod:"calico-apiserver-765cfc7478-h4bdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68fd5452af4", MAC:"4e:a9:7c:c1:13:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:24.624612 containerd[1556]: 2025-11-06 00:30:24.616 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-h4bdp" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--h4bdp-eth0" Nov 6 00:30:24.662514 containerd[1556]: time="2025-11-06T00:30:24.662400129Z" level=info msg="connecting to shim 48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e" address="unix:///run/containerd/s/9a36fd1db7b65ffd6bab9ddb7145771bb8884c510f81e3012c8a4fabba279146" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:24.704629 kubelet[2741]: E1106 00:30:24.704591 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:24.707489 kubelet[2741]: E1106 00:30:24.707467 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:30:24.708961 kubelet[2741]: E1106 00:30:24.708736 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:30:24.710350 systemd[1]: Started cri-containerd-48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e.scope - libcontainer container 48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e. Nov 6 00:30:24.748055 systemd-networkd[1449]: calibebb9533ea3: Link UP Nov 6 00:30:24.749316 systemd-networkd[1449]: calibebb9533ea3: Gained carrier Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.530 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0 calico-apiserver-765cfc7478- calico-apiserver ad57f6e7-ba94-4071-8188-eaaec8d179ad 867 0 2025-11-06 00:29:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:765cfc7478 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-1-216 calico-apiserver-765cfc7478-qx6xb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibebb9533ea3 [] [] }} ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.531 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.571 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" HandleID="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Workload="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.571 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" HandleID="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Workload="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-1-216", "pod":"calico-apiserver-765cfc7478-qx6xb", "timestamp":"2025-11-06 00:30:24.571630251 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.571 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.595 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.595 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.669 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.685 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.692 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.697 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.700 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.701 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.703 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6 Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.718 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.730 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.135/26] block=192.168.98.128/26 
handle="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.730 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.135/26] handle="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" host="172-232-1-216" Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.730 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:30:24.766798 containerd[1556]: 2025-11-06 00:30:24.730 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.135/26] IPv6=[] ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" HandleID="k8s-pod-network.b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Workload="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" Nov 6 00:30:24.770352 containerd[1556]: 2025-11-06 00:30:24.742 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0", GenerateName:"calico-apiserver-765cfc7478-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad57f6e7-ba94-4071-8188-eaaec8d179ad", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cfc7478", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"calico-apiserver-765cfc7478-qx6xb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibebb9533ea3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:24.770352 containerd[1556]: 2025-11-06 00:30:24.742 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.135/32] ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" Nov 6 00:30:24.770352 containerd[1556]: 2025-11-06 00:30:24.742 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibebb9533ea3 ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" Nov 6 00:30:24.770352 containerd[1556]: 2025-11-06 00:30:24.747 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" Nov 6 00:30:24.770352 containerd[1556]: 2025-11-06 00:30:24.748 [INFO][4619] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0", GenerateName:"calico-apiserver-765cfc7478-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad57f6e7-ba94-4071-8188-eaaec8d179ad", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cfc7478", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6", Pod:"calico-apiserver-765cfc7478-qx6xb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibebb9533ea3", MAC:"92:31:a1:a7:87:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:24.770352 containerd[1556]: 2025-11-06 00:30:24.759 [INFO][4619] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" Namespace="calico-apiserver" Pod="calico-apiserver-765cfc7478-qx6xb" WorkloadEndpoint="172--232--1--216-k8s-calico--apiserver--765cfc7478--qx6xb-eth0" Nov 6 00:30:24.792686 systemd-networkd[1449]: calib7b80a49858: Gained IPv6LL Nov 6 00:30:24.799349 containerd[1556]: time="2025-11-06T00:30:24.798348389Z" level=info msg="connecting to shim b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6" address="unix:///run/containerd/s/64a0a7ae77eeb41c6e04e2d3fce3e89fb461c8ccef924ba07965f05e0b9dd040" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:24.833577 systemd[1]: Started cri-containerd-b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6.scope - libcontainer container b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6. Nov 6 00:30:24.890635 containerd[1556]: time="2025-11-06T00:30:24.890592244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cfc7478-h4bdp,Uid:b84c01a2-1d41-475e-8a7a-47755f9e00e7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"48a8863fdd822b9425aabcaaf3e34928904ab74bbc942f9007f38f429efa166e\"" Nov 6 00:30:24.893841 containerd[1556]: time="2025-11-06T00:30:24.893809913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:30:24.930591 containerd[1556]: time="2025-11-06T00:30:24.930549163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cfc7478-qx6xb,Uid:ad57f6e7-ba94-4071-8188-eaaec8d179ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b070f06d08471a89bde8a763711936ae0dfe012dca9bff3c69b388f7788c94c6\"" Nov 6 00:30:25.046487 containerd[1556]: time="2025-11-06T00:30:25.046388149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:25.047713 containerd[1556]: time="2025-11-06T00:30:25.047668784Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:30:25.048358 containerd[1556]: time="2025-11-06T00:30:25.047943287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:25.048392 kubelet[2741]: E1106 00:30:25.048123 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:25.048392 kubelet[2741]: E1106 00:30:25.048192 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:25.048392 kubelet[2741]: E1106 00:30:25.048369 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-h4bdp_calico-apiserver(b84c01a2-1d41-475e-8a7a-47755f9e00e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.048746 kubelet[2741]: E1106 00:30:25.048414 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:30:25.050096 containerd[1556]: time="2025-11-06T00:30:25.050014470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:30:25.220671 containerd[1556]: time="2025-11-06T00:30:25.220552842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:25.222073 containerd[1556]: time="2025-11-06T00:30:25.221981619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:30:25.222177 containerd[1556]: time="2025-11-06T00:30:25.222072530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:25.222396 kubelet[2741]: E1106 00:30:25.222308 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:25.222396 kubelet[2741]: E1106 00:30:25.222346 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:25.223219 kubelet[2741]: E1106 00:30:25.222411 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-qx6xb_calico-apiserver(ad57f6e7-ba94-4071-8188-eaaec8d179ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.223219 kubelet[2741]: E1106 00:30:25.222439 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:30:25.463814 containerd[1556]: time="2025-11-06T00:30:25.463781613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s88hx,Uid:d99fbda4-0f0c-421c-a518-a4c5a391c340,Namespace:calico-system,Attempt:0,}" Nov 6 00:30:25.570634 systemd-networkd[1449]: cali8877cc46453: Link UP Nov 6 00:30:25.573720 systemd-networkd[1449]: cali8877cc46453: Gained carrier Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.498 [INFO][4764] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--1--216-k8s-csi--node--driver--s88hx-eth0 csi-node-driver- calico-system d99fbda4-0f0c-421c-a518-a4c5a391c340 772 0 2025-11-06 00:30:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-1-216 csi-node-driver-s88hx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8877cc46453 [] [] }} ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.498 [INFO][4764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.528 [INFO][4776] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" HandleID="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Workload="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.528 [INFO][4776] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" HandleID="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Workload="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-1-216", "pod":"csi-node-driver-s88hx", "timestamp":"2025-11-06 00:30:25.52829901 +0000 UTC"}, Hostname:"172-232-1-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.528 [INFO][4776] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.528 [INFO][4776] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.528 [INFO][4776] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-1-216' Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.536 [INFO][4776] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.541 [INFO][4776] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.546 [INFO][4776] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.548 [INFO][4776] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.550 [INFO][4776] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.550 [INFO][4776] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.552 [INFO][4776] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373 Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.557 
[INFO][4776] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.563 [INFO][4776] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.136/26] block=192.168.98.128/26 handle="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.563 [INFO][4776] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.136/26] handle="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" host="172-232-1-216" Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.563 [INFO][4776] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:30:25.593649 containerd[1556]: 2025-11-06 00:30:25.563 [INFO][4776] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.136/26] IPv6=[] ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" HandleID="k8s-pod-network.5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Workload="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" Nov 6 00:30:25.594949 containerd[1556]: 2025-11-06 00:30:25.567 [INFO][4764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-csi--node--driver--s88hx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d99fbda4-0f0c-421c-a518-a4c5a391c340", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 
30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"", Pod:"csi-node-driver-s88hx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8877cc46453", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:25.594949 containerd[1556]: 2025-11-06 00:30:25.567 [INFO][4764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.136/32] ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" Nov 6 00:30:25.594949 containerd[1556]: 2025-11-06 00:30:25.567 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8877cc46453 ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" Nov 6 00:30:25.594949 containerd[1556]: 2025-11-06 00:30:25.570 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" Nov 6 00:30:25.594949 containerd[1556]: 2025-11-06 00:30:25.573 [INFO][4764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--1--216-k8s-csi--node--driver--s88hx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d99fbda4-0f0c-421c-a518-a4c5a391c340", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-1-216", ContainerID:"5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373", Pod:"csi-node-driver-s88hx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, 
InterfaceName:"cali8877cc46453", MAC:"72:74:9c:49:d5:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:30:25.594949 containerd[1556]: 2025-11-06 00:30:25.586 [INFO][4764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" Namespace="calico-system" Pod="csi-node-driver-s88hx" WorkloadEndpoint="172--232--1--216-k8s-csi--node--driver--s88hx-eth0" Nov 6 00:30:25.618658 containerd[1556]: time="2025-11-06T00:30:25.618566067Z" level=info msg="connecting to shim 5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373" address="unix:///run/containerd/s/ff572f8c2c0103b78e8e1fb5bb96e0854f68258458e5f557979fcb3290aa0820" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:25.649282 systemd[1]: Started cri-containerd-5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373.scope - libcontainer container 5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373. 
Nov 6 00:30:25.685290 containerd[1556]: time="2025-11-06T00:30:25.685202648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s88hx,Uid:d99fbda4-0f0c-421c-a518-a4c5a391c340,Namespace:calico-system,Attempt:0,} returns sandbox id \"5dff97e0de30ce220e71d49a891eedd4e2aaa76cbf00020107d9b13f79690373\"" Nov 6 00:30:25.687437 containerd[1556]: time="2025-11-06T00:30:25.687316171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:30:25.708186 kubelet[2741]: E1106 00:30:25.707952 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:30:25.715284 kubelet[2741]: E1106 00:30:25.715260 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:30:25.715857 kubelet[2741]: E1106 00:30:25.715552 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 
6 00:30:25.716531 kubelet[2741]: E1106 00:30:25.716500 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:30:25.833535 containerd[1556]: time="2025-11-06T00:30:25.833248846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:25.835543 containerd[1556]: time="2025-11-06T00:30:25.835417030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:30:25.835543 containerd[1556]: time="2025-11-06T00:30:25.835457471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:30:25.835783 kubelet[2741]: E1106 00:30:25.835697 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:30:25.835783 kubelet[2741]: E1106 00:30:25.835767 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:30:25.836340 kubelet[2741]: E1106 00:30:25.836304 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.837624 containerd[1556]: time="2025-11-06T00:30:25.837480064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:30:25.975593 containerd[1556]: time="2025-11-06T00:30:25.975537039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:25.976518 containerd[1556]: time="2025-11-06T00:30:25.976443730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:30:25.976709 containerd[1556]: time="2025-11-06T00:30:25.976535451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:30:25.976979 kubelet[2741]: E1106 00:30:25.976628 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:30:25.976979 kubelet[2741]: E1106 00:30:25.976705 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:30:25.976979 kubelet[2741]: E1106 00:30:25.976810 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.977488 kubelet[2741]: E1106 00:30:25.976887 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:30:26.136501 systemd-networkd[1449]: 
cali68fd5452af4: Gained IPv6LL Nov 6 00:30:26.200347 systemd-networkd[1449]: calibebb9533ea3: Gained IPv6LL Nov 6 00:30:26.717653 kubelet[2741]: E1106 00:30:26.717603 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:30:26.719721 kubelet[2741]: E1106 00:30:26.719681 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:30:26.719815 kubelet[2741]: E1106 00:30:26.719781 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:30:27.352327 systemd-networkd[1449]: cali8877cc46453: Gained IPv6LL Nov 6 00:30:29.466428 containerd[1556]: time="2025-11-06T00:30:29.466382320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:30:29.610906 containerd[1556]: time="2025-11-06T00:30:29.610838154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:29.612352 containerd[1556]: time="2025-11-06T00:30:29.612280006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:30:29.612352 containerd[1556]: time="2025-11-06T00:30:29.612319157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:30:29.612521 kubelet[2741]: E1106 00:30:29.612471 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:30:29.612521 kubelet[2741]: E1106 00:30:29.612512 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:30:29.612898 kubelet[2741]: E1106 00:30:29.612600 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:29.614699 containerd[1556]: time="2025-11-06T00:30:29.614490336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:30:29.735715 containerd[1556]: time="2025-11-06T00:30:29.735581684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:29.736753 containerd[1556]: time="2025-11-06T00:30:29.736724625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:30:29.736860 containerd[1556]: time="2025-11-06T00:30:29.736775865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:30:29.736971 kubelet[2741]: E1106 00:30:29.736899 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:30:29.737237 kubelet[2741]: E1106 00:30:29.736976 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:30:29.738274 kubelet[2741]: E1106 00:30:29.737337 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:29.738274 kubelet[2741]: E1106 00:30:29.737412 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f" Nov 6 00:30:37.464499 containerd[1556]: 
time="2025-11-06T00:30:37.464440034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:30:37.771650 containerd[1556]: time="2025-11-06T00:30:37.771588112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:37.773455 containerd[1556]: time="2025-11-06T00:30:37.773305021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:30:37.773455 containerd[1556]: time="2025-11-06T00:30:37.773338561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:30:37.773701 kubelet[2741]: E1106 00:30:37.773644 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:30:37.774332 kubelet[2741]: E1106 00:30:37.773705 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:30:37.774332 kubelet[2741]: E1106 00:30:37.773786 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-f5f4d7b75-nsr7v_calico-system(456ae3b3-bf69-483d-be8f-52ebc676c862): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:37.774332 kubelet[2741]: E1106 00:30:37.773822 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:30:38.467533 containerd[1556]: time="2025-11-06T00:30:38.466400044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:30:38.604430 containerd[1556]: time="2025-11-06T00:30:38.604340374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:38.605757 containerd[1556]: time="2025-11-06T00:30:38.605676761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:30:38.605906 containerd[1556]: time="2025-11-06T00:30:38.605779852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:30:38.606122 kubelet[2741]: E1106 00:30:38.606046 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:30:38.606279 kubelet[2741]: E1106 00:30:38.606134 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:30:38.606378 kubelet[2741]: E1106 00:30:38.606344 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:38.608396 containerd[1556]: time="2025-11-06T00:30:38.608227804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:30:38.757575 containerd[1556]: time="2025-11-06T00:30:38.757510604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:38.759613 containerd[1556]: time="2025-11-06T00:30:38.759230443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:30:38.759613 containerd[1556]: time="2025-11-06T00:30:38.759478754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:38.759842 
kubelet[2741]: E1106 00:30:38.759805 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:38.759890 kubelet[2741]: E1106 00:30:38.759859 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:38.760093 kubelet[2741]: E1106 00:30:38.760051 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-h4bdp_calico-apiserver(b84c01a2-1d41-475e-8a7a-47755f9e00e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:38.760139 kubelet[2741]: E1106 00:30:38.760102 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:30:38.760892 containerd[1556]: time="2025-11-06T00:30:38.760844671Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:30:38.924581 containerd[1556]: time="2025-11-06T00:30:38.924513924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:38.925639 containerd[1556]: time="2025-11-06T00:30:38.925596450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:30:38.925749 containerd[1556]: time="2025-11-06T00:30:38.925721001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:38.926966 kubelet[2741]: E1106 00:30:38.926005 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:30:38.926966 kubelet[2741]: E1106 00:30:38.926104 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:30:38.926966 kubelet[2741]: E1106 00:30:38.926395 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-ggwql_calico-system(0b8ecf15-79f1-43df-a5c1-419b36087e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:38.927890 containerd[1556]: time="2025-11-06T00:30:38.926888986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:30:38.927946 kubelet[2741]: E1106 00:30:38.927227 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:30:39.054850 containerd[1556]: time="2025-11-06T00:30:39.054571868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:39.060578 containerd[1556]: time="2025-11-06T00:30:39.060477007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:30:39.060674 containerd[1556]: time="2025-11-06T00:30:39.060613638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:30:39.061133 kubelet[2741]: E1106 00:30:39.061069 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:30:39.062942 kubelet[2741]: E1106 00:30:39.061468 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:30:39.062942 kubelet[2741]: E1106 00:30:39.061632 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:39.062942 kubelet[2741]: E1106 00:30:39.061705 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:30:39.464730 containerd[1556]: 
time="2025-11-06T00:30:39.463932680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:30:39.597477 containerd[1556]: time="2025-11-06T00:30:39.597378288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:30:39.599068 containerd[1556]: time="2025-11-06T00:30:39.598976666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:30:39.599259 containerd[1556]: time="2025-11-06T00:30:39.599090667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:30:39.599410 kubelet[2741]: E1106 00:30:39.599300 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:30:39.599410 kubelet[2741]: E1106 00:30:39.599384 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:30:39.599588 kubelet[2741]: E1106 00:30:39.599459 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-qx6xb_calico-apiserver(ad57f6e7-ba94-4071-8188-eaaec8d179ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:30:39.599588 kubelet[2741]: E1106 00:30:39.599493 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad"
Nov 6 00:30:40.468702 kubelet[2741]: E1106 00:30:40.468591 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f"
Nov 6 00:30:47.803549 containerd[1556]: time="2025-11-06T00:30:47.803456342Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"7632c69a5052bec6a67f1e7831b85566acfff0778d2ac959ddb995ce323abf1e\" pid:4877 exited_at:{seconds:1762389047 nanos:801636550}"
Nov 6 00:30:47.808105 kubelet[2741]: E1106 00:30:47.807805 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:30:49.464054 kubelet[2741]: E1106 00:30:49.463971 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14"
Nov 6 00:30:50.472407 kubelet[2741]: E1106 00:30:50.472350 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7"
Nov 6 00:30:51.464025 kubelet[2741]: E1106 00:30:51.463667 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad"
Nov 6 00:30:51.464025 kubelet[2741]: E1106 00:30:51.463958 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862"
Nov 6 00:30:52.466511 containerd[1556]: time="2025-11-06T00:30:52.465383719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 6 00:30:52.469759 kubelet[2741]: E1106 00:30:52.469690 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340"
Nov 6 00:30:52.608715 containerd[1556]: time="2025-11-06T00:30:52.608653204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:30:52.609948 containerd[1556]: time="2025-11-06T00:30:52.609854475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 6 00:30:52.609948 containerd[1556]: time="2025-11-06T00:30:52.609925379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 6 00:30:52.610335 kubelet[2741]: E1106 00:30:52.610236 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 6 00:30:52.610335 kubelet[2741]: E1106 00:30:52.610316 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 6 00:30:52.610738 kubelet[2741]: E1106 00:30:52.610691 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:30:52.612963 containerd[1556]: time="2025-11-06T00:30:52.612912570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 6 00:30:52.783381 containerd[1556]: time="2025-11-06T00:30:52.783307401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:30:52.789044 containerd[1556]: time="2025-11-06T00:30:52.788880429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 6 00:30:52.789044 containerd[1556]: time="2025-11-06T00:30:52.789019957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 6 00:30:52.789245 kubelet[2741]: E1106 00:30:52.789172 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 6 00:30:52.789245 kubelet[2741]: E1106 00:30:52.789225 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 6 00:30:52.789468 kubelet[2741]: E1106 00:30:52.789310 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:30:52.789468 kubelet[2741]: E1106 00:30:52.789374 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f"
Nov 6 00:31:00.472038 containerd[1556]: time="2025-11-06T00:31:00.471594641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 6 00:31:00.607241 containerd[1556]: time="2025-11-06T00:31:00.607131704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:00.608869 containerd[1556]: time="2025-11-06T00:31:00.608631525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 6 00:31:00.608869 containerd[1556]: time="2025-11-06T00:31:00.608661053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:31:00.609350 kubelet[2741]: E1106 00:31:00.609274 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 6 00:31:00.611080 kubelet[2741]: E1106 00:31:00.609769 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 6 00:31:00.611240 kubelet[2741]: E1106 00:31:00.611216 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-ggwql_calico-system(0b8ecf15-79f1-43df-a5c1-419b36087e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:00.611360 kubelet[2741]: E1106 00:31:00.611326 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14"
Nov 6 00:31:02.485611 containerd[1556]: time="2025-11-06T00:31:02.485282490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 6 00:31:02.633882 containerd[1556]: time="2025-11-06T00:31:02.633518283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:02.634653 containerd[1556]: time="2025-11-06T00:31:02.634615168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 6 00:31:02.634653 containerd[1556]: time="2025-11-06T00:31:02.634699042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 6 00:31:02.639028 kubelet[2741]: E1106 00:31:02.638935 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 6 00:31:02.639028 kubelet[2741]: E1106 00:31:02.639002 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 6 00:31:02.641381 kubelet[2741]: E1106 00:31:02.641312 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f5f4d7b75-nsr7v_calico-system(456ae3b3-bf69-483d-be8f-52ebc676c862): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:02.641663 kubelet[2741]: E1106 00:31:02.641622 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862"
Nov 6 00:31:03.467448 containerd[1556]: time="2025-11-06T00:31:03.466884792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 6 00:31:03.470000 kubelet[2741]: E1106 00:31:03.467903 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f"
Nov 6 00:31:03.658569 containerd[1556]: time="2025-11-06T00:31:03.658514173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:03.660055 containerd[1556]: time="2025-11-06T00:31:03.659950647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 6 00:31:03.661193 containerd[1556]: time="2025-11-06T00:31:03.660113176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 6 00:31:03.661462 kubelet[2741]: E1106 00:31:03.661408 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 00:31:03.661462 kubelet[2741]: E1106 00:31:03.661462 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 00:31:03.662789 kubelet[2741]: E1106 00:31:03.661833 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:03.663316 containerd[1556]: time="2025-11-06T00:31:03.663254227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 6 00:31:03.946204 containerd[1556]: time="2025-11-06T00:31:03.945923757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:03.949388 containerd[1556]: time="2025-11-06T00:31:03.947761494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 6 00:31:03.949628 containerd[1556]: time="2025-11-06T00:31:03.949506458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 6 00:31:03.949948 kubelet[2741]: E1106 00:31:03.949875 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:31:03.949948 kubelet[2741]: E1106 00:31:03.949924 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:31:03.950114 kubelet[2741]: E1106 00:31:03.950094 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:03.950583 kubelet[2741]: E1106 00:31:03.950530 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340"
Nov 6 00:31:04.467260 containerd[1556]: time="2025-11-06T00:31:04.467009144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:31:04.603810 containerd[1556]: time="2025-11-06T00:31:04.603564442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:04.605031 containerd[1556]: time="2025-11-06T00:31:04.604882046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:31:04.605207 containerd[1556]: time="2025-11-06T00:31:04.604971930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:31:04.605386 kubelet[2741]: E1106 00:31:04.605340 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:04.605440 kubelet[2741]: E1106 00:31:04.605391 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:04.606035 kubelet[2741]: E1106 00:31:04.605993 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-h4bdp_calico-apiserver(b84c01a2-1d41-475e-8a7a-47755f9e00e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:04.606081 kubelet[2741]: E1106 00:31:04.606058 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7"
Nov 6 00:31:04.607339 containerd[1556]: time="2025-11-06T00:31:04.607287549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:31:04.768409 containerd[1556]: time="2025-11-06T00:31:04.768347083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:04.769348 containerd[1556]: time="2025-11-06T00:31:04.769308440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:31:04.769444 containerd[1556]: time="2025-11-06T00:31:04.769404394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:31:04.769670 kubelet[2741]: E1106 00:31:04.769619 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:04.769952 kubelet[2741]: E1106 00:31:04.769681 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:04.769952 kubelet[2741]: E1106 00:31:04.769769 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-qx6xb_calico-apiserver(ad57f6e7-ba94-4071-8188-eaaec8d179ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:04.769952 kubelet[2741]: E1106 00:31:04.769804 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad"
Nov 6 00:31:08.480504 kubelet[2741]: E1106 00:31:08.479664 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:31:11.466700 kubelet[2741]: E1106 00:31:11.465606 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14"
Nov 6 00:31:15.462947 kubelet[2741]: E1106 00:31:15.462591 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:31:16.470773 kubelet[2741]: E1106 00:31:16.469957 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7"
Nov 6 00:31:16.472677 kubelet[2741]: E1106 00:31:16.470560 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862"
Nov 6 00:31:17.462207 kubelet[2741]: E1106 00:31:17.462163 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:31:17.464874 kubelet[2741]: E1106 00:31:17.464835 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f"
Nov 6 00:31:17.762246 containerd[1556]: time="2025-11-06T00:31:17.762137632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"b5b84e15b3aa2dc729e8cff4e2ff4bd5a6210f6ba0d53688097018ee5ee91e7a\" pid:4918 exited_at:{seconds:1762389077 nanos:761803978}"
Nov 6 00:31:18.463962 kubelet[2741]: E1106 00:31:18.463929 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 6 00:31:19.466033 kubelet[2741]: E1106 00:31:19.465989 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad"
Nov 6 00:31:19.467550 kubelet[2741]: E1106 00:31:19.466118 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340"
Nov 6 00:31:26.467014 kubelet[2741]: E1106 00:31:26.466944 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14"
Nov 6 00:31:28.467725 kubelet[2741]: E1106 00:31:28.467097 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f"
Nov 6 00:31:28.474175 kubelet[2741]: E1106 00:31:28.473865 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7"
Nov 6 00:31:31.466036 kubelet[2741]: E1106 00:31:31.465654 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862"
Nov 6 00:31:31.470136 kubelet[2741]: E1106 00:31:31.469480 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340"
Nov 6 00:31:31.470136 kubelet[2741]: E1106 00:31:31.469617 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:31:32.465850 kubelet[2741]: E1106 00:31:32.465746 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:31:40.462673 kubelet[2741]: E1106 00:31:40.462630 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:31:41.465488 kubelet[2741]: E1106 00:31:41.465329 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:31:41.468031 containerd[1556]: time="2025-11-06T00:31:41.467674109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:31:41.632915 containerd[1556]: time="2025-11-06T00:31:41.632848106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:41.634452 containerd[1556]: time="2025-11-06T00:31:41.634299946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:31:41.634452 containerd[1556]: time="2025-11-06T00:31:41.634414852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:31:41.635271 kubelet[2741]: E1106 00:31:41.635200 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:31:41.635405 kubelet[2741]: E1106 00:31:41.635373 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:31:41.635739 kubelet[2741]: E1106 00:31:41.635718 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:41.637099 containerd[1556]: time="2025-11-06T00:31:41.637029829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:31:41.772169 containerd[1556]: time="2025-11-06T00:31:41.772107162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:41.773469 containerd[1556]: time="2025-11-06T00:31:41.773345347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:31:41.773469 containerd[1556]: time="2025-11-06T00:31:41.773422065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:31:41.773879 kubelet[2741]: E1106 00:31:41.773772 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:31:41.773879 kubelet[2741]: E1106 00:31:41.773854 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:31:41.774273 kubelet[2741]: E1106 00:31:41.774212 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-ggwql_calico-system(0b8ecf15-79f1-43df-a5c1-419b36087e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:41.774544 containerd[1556]: time="2025-11-06T00:31:41.774515005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:31:41.776467 kubelet[2741]: E1106 00:31:41.776401 2741 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:31:41.908273 containerd[1556]: time="2025-11-06T00:31:41.908099759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:41.909284 containerd[1556]: time="2025-11-06T00:31:41.909182048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:31:41.909284 containerd[1556]: time="2025-11-06T00:31:41.909257086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:31:41.910008 kubelet[2741]: E1106 00:31:41.909558 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:31:41.910008 kubelet[2741]: E1106 00:31:41.909617 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:31:41.910008 kubelet[2741]: E1106 00:31:41.909676 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-669d4994db-c2xm2_calico-system(5fa7c5d6-8a14-4c07-a175-2cee4871f07f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:41.910121 kubelet[2741]: E1106 00:31:41.909709 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f" Nov 6 00:31:42.467202 kubelet[2741]: E1106 00:31:42.466363 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:31:43.465362 kubelet[2741]: E1106 00:31:43.464995 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:31:43.465905 containerd[1556]: time="2025-11-06T00:31:43.465865144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:31:43.700050 containerd[1556]: time="2025-11-06T00:31:43.699994377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:43.702037 containerd[1556]: time="2025-11-06T00:31:43.701529757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:31:43.702037 containerd[1556]: time="2025-11-06T00:31:43.701534556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:31:43.703053 kubelet[2741]: E1106 00:31:43.702768 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:31:43.704055 kubelet[2741]: E1106 00:31:43.703666 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:31:43.704386 kubelet[2741]: E1106 00:31:43.704128 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f5f4d7b75-nsr7v_calico-system(456ae3b3-bf69-483d-be8f-52ebc676c862): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:43.704638 kubelet[2741]: E1106 00:31:43.704464 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:31:45.463128 containerd[1556]: time="2025-11-06T00:31:45.463077570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:31:45.592609 containerd[1556]: time="2025-11-06T00:31:45.592571625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:45.594084 containerd[1556]: time="2025-11-06T00:31:45.593960138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 
00:31:45.594084 containerd[1556]: time="2025-11-06T00:31:45.594056657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:31:45.596522 kubelet[2741]: E1106 00:31:45.596468 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:31:45.597656 kubelet[2741]: E1106 00:31:45.597089 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:31:45.597656 kubelet[2741]: E1106 00:31:45.597281 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:45.600220 containerd[1556]: time="2025-11-06T00:31:45.600129440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:31:45.745029 containerd[1556]: time="2025-11-06T00:31:45.744870962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:45.745749 containerd[1556]: time="2025-11-06T00:31:45.745663351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:31:45.745749 containerd[1556]: time="2025-11-06T00:31:45.745744149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:31:45.746207 kubelet[2741]: E1106 00:31:45.745900 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:31:45.746207 kubelet[2741]: E1106 00:31:45.745954 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:31:45.746522 kubelet[2741]: E1106 00:31:45.746466 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s88hx_calico-system(d99fbda4-0f0c-421c-a518-a4c5a391c340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:45.746640 kubelet[2741]: E1106 00:31:45.746573 2741 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:31:47.760510 containerd[1556]: time="2025-11-06T00:31:47.760466899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"de052f900e9c4e6db27799f3c11a59e44fcb4d036ea05ecacc94d473b0b92ad3\" pid:4955 exited_at:{seconds:1762389107 nanos:759990101}" Nov 6 00:31:51.832226 systemd[1]: Started sshd@7-172.232.1.216:22-139.178.89.65:37576.service - OpenSSH per-connection server daemon (139.178.89.65:37576). Nov 6 00:31:52.185689 sshd[4979]: Accepted publickey for core from 139.178.89.65 port 37576 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:31:52.187636 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:31:52.194403 systemd-logind[1532]: New session 8 of user core. Nov 6 00:31:52.199299 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 6 00:31:52.465604 kubelet[2741]: E1106 00:31:52.465135 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14"
Nov 6 00:31:52.569620 sshd[4982]: Connection closed by 139.178.89.65 port 37576
Nov 6 00:31:52.569925 sshd-session[4979]: pam_unix(sshd:session): session closed for user core
Nov 6 00:31:52.579406 systemd[1]: sshd@7-172.232.1.216:22-139.178.89.65:37576.service: Deactivated successfully.
Nov 6 00:31:52.583515 systemd[1]: session-8.scope: Deactivated successfully.
Nov 6 00:31:52.587366 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit.
Nov 6 00:31:52.592509 systemd-logind[1532]: Removed session 8.
Nov 6 00:31:54.466748 kubelet[2741]: E1106 00:31:54.466677 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862"
Nov 6 00:31:54.468630 kubelet[2741]: E1106 00:31:54.468587 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f"
Nov 6 00:31:55.463812 containerd[1556]: time="2025-11-06T00:31:55.463643743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:31:55.603316 containerd[1556]: time="2025-11-06T00:31:55.603262854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:55.604508 containerd[1556]: time="2025-11-06T00:31:55.604450477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:31:55.604508 containerd[1556]: time="2025-11-06T00:31:55.604483117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:31:55.604897 kubelet[2741]: E1106 00:31:55.604826 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:55.604897 kubelet[2741]: E1106 00:31:55.604869 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:55.605563 kubelet[2741]: E1106 00:31:55.604935 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-h4bdp_calico-apiserver(b84c01a2-1d41-475e-8a7a-47755f9e00e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:55.605563 kubelet[2741]: E1106 00:31:55.604968 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7"
Nov 6 00:31:56.466511 kubelet[2741]: E1106 00:31:56.466285 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340"
Nov 6 00:31:57.464862 containerd[1556]: time="2025-11-06T00:31:57.464670671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:31:57.617276 containerd[1556]: time="2025-11-06T00:31:57.617216142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:31:57.618063 containerd[1556]: time="2025-11-06T00:31:57.618030025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:31:57.618133 containerd[1556]: time="2025-11-06T00:31:57.618105143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:31:57.618399 kubelet[2741]: E1106 00:31:57.618357 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:57.620023 kubelet[2741]: E1106 00:31:57.618756 2741 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:31:57.620121 kubelet[2741]: E1106 00:31:57.620098 2741 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-765cfc7478-qx6xb_calico-apiserver(ad57f6e7-ba94-4071-8188-eaaec8d179ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:31:57.620278 kubelet[2741]: E1106 00:31:57.620199 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad"
Nov 6 00:31:57.634346 systemd[1]: Started sshd@8-172.232.1.216:22-139.178.89.65:55930.service - OpenSSH per-connection server daemon (139.178.89.65:55930).
Nov 6 00:31:57.983239 sshd[5011]: Accepted publickey for core from 139.178.89.65 port 55930 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE
Nov 6 00:31:57.985956 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:31:57.994741 systemd-logind[1532]: New session 9 of user core.
Nov 6 00:31:58.003788 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 6 00:31:58.314882 sshd[5014]: Connection closed by 139.178.89.65 port 55930
Nov 6 00:31:58.315548 sshd-session[5011]: pam_unix(sshd:session): session closed for user core
Nov 6 00:31:58.320918 systemd[1]: sshd@8-172.232.1.216:22-139.178.89.65:55930.service: Deactivated successfully.
Nov 6 00:31:58.321424 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit.
Nov 6 00:31:58.326990 systemd[1]: session-9.scope: Deactivated successfully.
Nov 6 00:31:58.331209 systemd-logind[1532]: Removed session 9.
Nov 6 00:32:03.376787 systemd[1]: Started sshd@9-172.232.1.216:22-139.178.89.65:55942.service - OpenSSH per-connection server daemon (139.178.89.65:55942).
Nov 6 00:32:03.733976 sshd[5027]: Accepted publickey for core from 139.178.89.65 port 55942 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:03.733819 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:03.742355 systemd-logind[1532]: New session 10 of user core. Nov 6 00:32:03.748280 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:32:04.090729 sshd[5030]: Connection closed by 139.178.89.65 port 55942 Nov 6 00:32:04.093545 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:04.096961 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:32:04.098859 systemd[1]: sshd@9-172.232.1.216:22-139.178.89.65:55942.service: Deactivated successfully. Nov 6 00:32:04.101915 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:32:04.107689 systemd-logind[1532]: Removed session 10. Nov 6 00:32:04.156226 systemd[1]: Started sshd@10-172.232.1.216:22-139.178.89.65:55948.service - OpenSSH per-connection server daemon (139.178.89.65:55948). 
Nov 6 00:32:04.479315 kubelet[2741]: E1106 00:32:04.478851 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:32:04.507503 sshd[5047]: Accepted publickey for core from 139.178.89.65 port 55948 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:04.511829 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:04.518768 systemd-logind[1532]: New session 11 of user core. Nov 6 00:32:04.527296 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:32:04.855996 sshd[5050]: Connection closed by 139.178.89.65 port 55948 Nov 6 00:32:04.856874 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:04.862113 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:32:04.862982 systemd[1]: sshd@10-172.232.1.216:22-139.178.89.65:55948.service: Deactivated successfully. Nov 6 00:32:04.866067 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:32:04.868286 systemd-logind[1532]: Removed session 11. Nov 6 00:32:04.923385 systemd[1]: Started sshd@11-172.232.1.216:22-139.178.89.65:55964.service - OpenSSH per-connection server daemon (139.178.89.65:55964). 
Nov 6 00:32:05.278007 sshd[5059]: Accepted publickey for core from 139.178.89.65 port 55964 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:05.279071 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:05.288200 systemd-logind[1532]: New session 12 of user core. Nov 6 00:32:05.294279 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:32:05.463669 kubelet[2741]: E1106 00:32:05.463597 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:32:05.614524 sshd[5062]: Connection closed by 139.178.89.65 port 55964 Nov 6 00:32:05.615636 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:05.620628 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:32:05.622678 systemd[1]: sshd@11-172.232.1.216:22-139.178.89.65:55964.service: Deactivated successfully. Nov 6 00:32:05.625922 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:32:05.630801 systemd-logind[1532]: Removed session 12. 
Nov 6 00:32:06.472181 kubelet[2741]: E1106 00:32:06.471835 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:32:08.467640 kubelet[2741]: E1106 00:32:08.467281 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 6 00:32:08.470138 kubelet[2741]: E1106 00:32:08.470094 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f" Nov 6 00:32:09.463238 kubelet[2741]: E1106 00:32:09.463191 
2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:32:09.464770 kubelet[2741]: E1106 00:32:09.464690 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:32:10.686010 systemd[1]: Started sshd@12-172.232.1.216:22-139.178.89.65:33170.service - OpenSSH per-connection server daemon (139.178.89.65:33170). 
Nov 6 00:32:11.061875 sshd[5074]: Accepted publickey for core from 139.178.89.65 port 33170 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:11.064567 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:11.072232 systemd-logind[1532]: New session 13 of user core. Nov 6 00:32:11.076300 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:32:11.400360 sshd[5077]: Connection closed by 139.178.89.65 port 33170 Nov 6 00:32:11.401490 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:11.406179 systemd[1]: sshd@12-172.232.1.216:22-139.178.89.65:33170.service: Deactivated successfully. Nov 6 00:32:11.408711 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:32:11.409983 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:32:11.411948 systemd-logind[1532]: Removed session 13. Nov 6 00:32:11.464424 systemd[1]: Started sshd@13-172.232.1.216:22-139.178.89.65:33180.service - OpenSSH per-connection server daemon (139.178.89.65:33180). Nov 6 00:32:11.826723 sshd[5089]: Accepted publickey for core from 139.178.89.65 port 33180 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:11.828033 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:11.833715 systemd-logind[1532]: New session 14 of user core. Nov 6 00:32:11.841284 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:32:12.436356 sshd[5092]: Connection closed by 139.178.89.65 port 33180 Nov 6 00:32:12.437439 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:12.448371 systemd[1]: sshd@13-172.232.1.216:22-139.178.89.65:33180.service: Deactivated successfully. Nov 6 00:32:12.452776 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:32:12.455662 systemd-logind[1532]: Session 14 logged out. 
Waiting for processes to exit. Nov 6 00:32:12.458618 systemd-logind[1532]: Removed session 14. Nov 6 00:32:12.504053 systemd[1]: Started sshd@14-172.232.1.216:22-139.178.89.65:33190.service - OpenSSH per-connection server daemon (139.178.89.65:33190). Nov 6 00:32:12.883855 sshd[5102]: Accepted publickey for core from 139.178.89.65 port 33190 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:12.885683 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:12.891840 systemd-logind[1532]: New session 15 of user core. Nov 6 00:32:12.905402 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:32:13.950737 sshd[5105]: Connection closed by 139.178.89.65 port 33190 Nov 6 00:32:13.955334 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:13.962306 systemd[1]: sshd@14-172.232.1.216:22-139.178.89.65:33190.service: Deactivated successfully. Nov 6 00:32:13.965143 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:32:13.969445 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:32:13.974436 systemd-logind[1532]: Removed session 15. Nov 6 00:32:14.011592 systemd[1]: Started sshd@15-172.232.1.216:22-139.178.89.65:33204.service - OpenSSH per-connection server daemon (139.178.89.65:33204). Nov 6 00:32:14.379544 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 33204 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:14.380810 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:14.387329 systemd-logind[1532]: New session 16 of user core. Nov 6 00:32:14.393398 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 6 00:32:14.911345 sshd[5123]: Connection closed by 139.178.89.65 port 33204 Nov 6 00:32:14.914066 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:14.918564 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:32:14.920377 systemd[1]: sshd@15-172.232.1.216:22-139.178.89.65:33204.service: Deactivated successfully. Nov 6 00:32:14.924323 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:32:14.927713 systemd-logind[1532]: Removed session 16. Nov 6 00:32:14.976449 systemd[1]: Started sshd@16-172.232.1.216:22-139.178.89.65:33216.service - OpenSSH per-connection server daemon (139.178.89.65:33216). Nov 6 00:32:15.325177 sshd[5133]: Accepted publickey for core from 139.178.89.65 port 33216 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:15.327376 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:15.332550 systemd-logind[1532]: New session 17 of user core. Nov 6 00:32:15.339342 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:32:15.678224 sshd[5138]: Connection closed by 139.178.89.65 port 33216 Nov 6 00:32:15.679383 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:15.685779 systemd[1]: sshd@16-172.232.1.216:22-139.178.89.65:33216.service: Deactivated successfully. Nov 6 00:32:15.688728 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:32:15.690796 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:32:15.693935 systemd-logind[1532]: Removed session 17. 
Nov 6 00:32:16.464572 kubelet[2741]: E1106 00:32:16.464525 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:32:17.756095 containerd[1556]: time="2025-11-06T00:32:17.756001392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03ac435b7f1904f9aee5001318e7cb7f59496f734756a4c1425553721ecc4e3\" id:\"3279b83d3034c015aa69d72cd2ba96d6e7042c7f9ab26ccf298b143e1c184178\" pid:5162 exited_at:{seconds:1762389137 nanos:755514589}" Nov 6 00:32:19.463938 kubelet[2741]: E1106 00:32:19.463798 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5f4d7b75-nsr7v" podUID="456ae3b3-bf69-483d-be8f-52ebc676c862" Nov 6 00:32:20.464635 kubelet[2741]: E1106 00:32:20.464538 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-qx6xb" podUID="ad57f6e7-ba94-4071-8188-eaaec8d179ad" Nov 6 00:32:20.467562 kubelet[2741]: E1106 00:32:20.464907 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-765cfc7478-h4bdp" podUID="b84c01a2-1d41-475e-8a7a-47755f9e00e7" Nov 6 00:32:20.748281 systemd[1]: Started sshd@17-172.232.1.216:22-139.178.89.65:57078.service - OpenSSH per-connection server daemon (139.178.89.65:57078). Nov 6 00:32:21.132212 sshd[5178]: Accepted publickey for core from 139.178.89.65 port 57078 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:21.136864 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:21.144637 systemd-logind[1532]: New session 18 of user core. Nov 6 00:32:21.151474 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:32:21.457360 sshd[5181]: Connection closed by 139.178.89.65 port 57078 Nov 6 00:32:21.458107 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:21.464309 systemd[1]: sshd@17-172.232.1.216:22-139.178.89.65:57078.service: Deactivated successfully. Nov 6 00:32:21.467981 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:32:21.471329 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit. 
Nov 6 00:32:21.475854 systemd-logind[1532]: Removed session 18. Nov 6 00:32:22.464624 kubelet[2741]: E1106 00:32:22.464558 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s88hx" podUID="d99fbda4-0f0c-421c-a518-a4c5a391c340" Nov 6 00:32:23.465035 kubelet[2741]: E1106 00:32:23.464996 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-669d4994db-c2xm2" podUID="5fa7c5d6-8a14-4c07-a175-2cee4871f07f" Nov 6 00:32:26.515436 systemd[1]: Started sshd@18-172.232.1.216:22-139.178.89.65:51610.service - OpenSSH per-connection server daemon (139.178.89.65:51610). Nov 6 00:32:26.854355 sshd[5193]: Accepted publickey for core from 139.178.89.65 port 51610 ssh2: RSA SHA256:lyj0t+bn7cbefkEkn/goJ5XaNxmH5xoObbwhBovCbAE Nov 6 00:32:26.855889 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:26.862982 systemd-logind[1532]: New session 19 of user core. Nov 6 00:32:26.865312 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:32:27.199141 sshd[5196]: Connection closed by 139.178.89.65 port 51610 Nov 6 00:32:27.201591 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:27.209192 systemd[1]: sshd@18-172.232.1.216:22-139.178.89.65:51610.service: Deactivated successfully. Nov 6 00:32:27.213656 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:32:27.216416 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:32:27.218939 systemd-logind[1532]: Removed session 19. 
Nov 6 00:32:27.463653 kubelet[2741]: E1106 00:32:27.463508 2741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ggwql" podUID="0b8ecf15-79f1-43df-a5c1-419b36087e14" Nov 6 00:32:28.462817 kubelet[2741]: E1106 00:32:28.462371 2741 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"