Nov 8 00:37:47.000783 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:37:47.000807 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:37:47.000816 kernel: BIOS-provided physical RAM map:
Nov 8 00:37:47.000822 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 8 00:37:47.000827 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 8 00:37:47.000836 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:37:47.000843 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 8 00:37:47.000849 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 8 00:37:47.000854 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:37:47.000860 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:37:47.000866 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:37:47.000871 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:37:47.000877 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 8 00:37:47.000886 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:37:47.000893 kernel: NX (Execute Disable) protection: active
Nov 8 00:37:47.000899 kernel: APIC: Static calls initialized
Nov 8 00:37:47.000904 kernel: SMBIOS 2.8 present.
Nov 8 00:37:47.000911 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 8 00:37:47.000917 kernel: Hypervisor detected: KVM
Nov 8 00:37:47.000925 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:37:47.000931 kernel: kvm-clock: using sched offset of 5867621427 cycles
Nov 8 00:37:47.000938 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:37:47.000944 kernel: tsc: Detected 1999.997 MHz processor
Nov 8 00:37:47.000950 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:37:47.000957 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:37:47.000963 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 8 00:37:47.000969 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:37:47.000975 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:37:47.000984 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 8 00:37:47.000990 kernel: Using GB pages for direct mapping
Nov 8 00:37:47.000996 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:37:47.001002 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 8 00:37:47.001008 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:37:47.001015 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:37:47.001021 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:37:47.001027 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 8 00:37:47.001033 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:37:47.001042 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:37:47.001048 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:37:47.001054 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:37:47.001064 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 8 00:37:47.001071 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 8 00:37:47.001077 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 8 00:37:47.001086 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 8 00:37:47.001093 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 8 00:37:47.001099 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 8 00:37:47.001105 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 8 00:37:47.001112 kernel: No NUMA configuration found
Nov 8 00:37:47.001118 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 8 00:37:47.001125 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Nov 8 00:37:47.001131 kernel: Zone ranges:
Nov 8 00:37:47.001140 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:37:47.001147 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:37:47.001153 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 8 00:37:47.001159 kernel: Movable zone start for each node
Nov 8 00:37:47.001166 kernel: Early memory node ranges
Nov 8 00:37:47.001172 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:37:47.001178 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 8 00:37:47.001185 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 8 00:37:47.001191 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 8 00:37:47.001197 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:37:47.001207 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:37:47.001213 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 8 00:37:47.001220 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:37:47.001226 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:37:47.001233 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:37:47.001239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:37:47.001245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:37:47.001252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:37:47.001258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:37:47.001267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:37:47.001273 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:37:47.001280 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:37:47.001287 kernel: TSC deadline timer available
Nov 8 00:37:47.001293 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:37:47.001300 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:37:47.001306 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:37:47.001313 kernel: kvm-guest: setup PV sched yield
Nov 8 00:37:47.001319 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:37:47.001329 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:37:47.001335 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:37:47.001342 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:37:47.001348 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:37:47.001355 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:37:47.001361 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:37:47.001367 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:37:47.001374 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:37:47.001381 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:37:47.001390 kernel: random: crng init done
Nov 8 00:37:47.001397 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:37:47.001403 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:37:47.001409 kernel: Fallback order for Node 0: 0
Nov 8 00:37:47.001416 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Nov 8 00:37:47.001422 kernel: Policy zone: Normal
Nov 8 00:37:47.001428 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:37:47.001435 kernel: software IO TLB: area num 2.
Nov 8 00:37:47.001444 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 227300K reserved, 0K cma-reserved)
Nov 8 00:37:47.001451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:37:47.001457 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:37:47.001463 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:37:47.001470 kernel: Dynamic Preempt: voluntary
Nov 8 00:37:47.001476 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:37:47.001486 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:37:47.001494 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:37:47.001500 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:37:47.001510 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:37:47.001517 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:37:47.001523 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:37:47.001530 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:37:47.001536 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:37:47.001543 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:37:47.001549 kernel: Console: colour VGA+ 80x25
Nov 8 00:37:47.001555 kernel: printk: console [tty0] enabled
Nov 8 00:37:47.001562 kernel: printk: console [ttyS0] enabled
Nov 8 00:37:47.001571 kernel: ACPI: Core revision 20230628
Nov 8 00:37:47.001578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:37:47.001584 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:37:47.001591 kernel: x2apic enabled
Nov 8 00:37:47.001606 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:37:47.001616 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:37:47.001622 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:37:47.001629 kernel: kvm-guest: setup PV IPIs
Nov 8 00:37:47.001636 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:37:47.001643 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:37:47.001649 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Nov 8 00:37:47.001656 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:37:47.001666 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:37:47.001672 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:37:47.001679 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:37:47.001686 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:37:47.001695 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:37:47.001702 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 8 00:37:47.001709 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:37:47.001716 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:37:47.003900 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:37:47.003919 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:37:47.003927 kernel: active return thunk: srso_alias_return_thunk
Nov 8 00:37:47.003935 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:37:47.003942 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 8 00:37:47.003954 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:37:47.003961 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:37:47.003967 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:37:47.003974 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:37:47.003981 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 8 00:37:47.003988 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:37:47.003995 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 8 00:37:47.004002 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 8 00:37:47.004012 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:37:47.004019 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:37:47.004026 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:37:47.004032 kernel: landlock: Up and running.
Nov 8 00:37:47.004039 kernel: SELinux: Initializing.
Nov 8 00:37:47.004046 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:37:47.004053 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:37:47.004060 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 8 00:37:47.004067 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:37:47.004077 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:37:47.004084 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:37:47.004091 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:37:47.004097 kernel: ... version: 0
Nov 8 00:37:47.004104 kernel: ... bit width: 48
Nov 8 00:37:47.004111 kernel: ... generic registers: 6
Nov 8 00:37:47.004118 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:37:47.004125 kernel: ... max period: 00007fffffffffff
Nov 8 00:37:47.004132 kernel: ... fixed-purpose events: 0
Nov 8 00:37:47.004141 kernel: ... event mask: 000000000000003f
Nov 8 00:37:47.004148 kernel: signal: max sigframe size: 3376
Nov 8 00:37:47.004155 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:37:47.004162 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:37:47.004169 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:37:47.004176 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:37:47.004183 kernel: .... node #0, CPUs: #1
Nov 8 00:37:47.004190 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:37:47.004196 kernel: smpboot: Max logical packages: 1
Nov 8 00:37:47.004203 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Nov 8 00:37:47.004213 kernel: devtmpfs: initialized
Nov 8 00:37:47.004220 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:37:47.004227 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:37:47.004234 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:37:47.004241 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:37:47.004247 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:37:47.004254 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:37:47.004261 kernel: audit: type=2000 audit(1762562265.786:1): state=initialized audit_enabled=0 res=1
Nov 8 00:37:47.004268 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:37:47.004277 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:37:47.004284 kernel: cpuidle: using governor menu
Nov 8 00:37:47.004291 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:37:47.004298 kernel: dca service started, version 1.12.1
Nov 8 00:37:47.004305 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:37:47.004312 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:37:47.004318 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:37:47.004325 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:37:47.004332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:37:47.004342 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:37:47.004349 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:37:47.004356 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:37:47.004362 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:37:47.004369 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:37:47.004376 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:37:47.004382 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:37:47.004389 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:37:47.004396 kernel: ACPI: Interpreter enabled
Nov 8 00:37:47.004405 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:37:47.004412 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:37:47.004419 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:37:47.004426 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:37:47.004433 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:37:47.004439 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:37:47.004634 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:37:47.004808 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:37:47.005144 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:37:47.005154 kernel: PCI host bridge to bus 0000:00
Nov 8 00:37:47.005300 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:37:47.005421 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:37:47.005536 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:37:47.005649 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 8 00:37:47.007520 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:37:47.007656 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 8 00:37:47.008190 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:37:47.008351 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:37:47.008496 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:37:47.008625 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 8 00:37:47.009827 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 8 00:37:47.009972 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 8 00:37:47.010296 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:37:47.010909 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Nov 8 00:37:47.011048 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Nov 8 00:37:47.011361 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 8 00:37:47.011485 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 8 00:37:47.011621 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:37:47.011788 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 8 00:37:47.011924 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 8 00:37:47.012248 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 8 00:37:47.012374 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 8 00:37:47.012510 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:37:47.012635 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:37:47.016291 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:37:47.016430 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Nov 8 00:37:47.016556 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Nov 8 00:37:47.016694 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:37:47.016900 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:37:47.016915 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:37:47.016922 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:37:47.016930 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:37:47.016941 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:37:47.016948 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:37:47.016956 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:37:47.016962 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:37:47.016970 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:37:47.016977 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:37:47.016983 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:37:47.016990 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:37:47.016997 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:37:47.017007 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:37:47.017014 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:37:47.017021 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:37:47.017028 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:37:47.017034 kernel: iommu: Default domain type: Translated
Nov 8 00:37:47.017041 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:37:47.017048 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:37:47.017055 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:37:47.017062 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 8 00:37:47.017072 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 8 00:37:47.017201 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:37:47.017327 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:37:47.017453 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:37:47.017463 kernel: vgaarb: loaded
Nov 8 00:37:47.017470 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:37:47.017477 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:37:47.017484 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:37:47.017495 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:37:47.017503 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:37:47.017510 kernel: pnp: PnP ACPI init
Nov 8 00:37:47.017659 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:37:47.017670 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:37:47.017678 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:37:47.017685 kernel: NET: Registered PF_INET protocol family
Nov 8 00:37:47.017692 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:37:47.017703 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:37:47.017710 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:37:47.017717 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:37:47.018975 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:37:47.018985 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:37:47.018993 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:37:47.019000 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:37:47.019007 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:37:47.019014 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:37:47.019155 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:37:47.019274 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:37:47.019388 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:37:47.019502 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 8 00:37:47.019615 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:37:47.019773 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 8 00:37:47.019788 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:37:47.019796 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:37:47.019808 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 8 00:37:47.019815 kernel: Initialise system trusted keyrings
Nov 8 00:37:47.019823 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:37:47.019830 kernel: Key type asymmetric registered
Nov 8 00:37:47.019836 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:37:47.019843 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:37:47.019850 kernel: io scheduler mq-deadline registered
Nov 8 00:37:47.019857 kernel: io scheduler kyber registered
Nov 8 00:37:47.019864 kernel: io scheduler bfq registered
Nov 8 00:37:47.019874 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:37:47.019881 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:37:47.019888 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:37:47.019896 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:37:47.019903 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:37:47.019910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:37:47.019917 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:37:47.019924 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:37:47.019931 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:37:47.020262 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:37:47.020384 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:37:47.020501 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:37:46 UTC (1762562266)
Nov 8 00:37:47.020617 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:37:47.020626 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:37:47.020633 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:37:47.020640 kernel: Segment Routing with IPv6
Nov 8 00:37:47.020647 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:37:47.020658 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:37:47.020665 kernel: Key type dns_resolver registered
Nov 8 00:37:47.020672 kernel: IPI shorthand broadcast: enabled
Nov 8 00:37:47.020679 kernel: sched_clock: Marking stable (959004400, 372989578)->(1492240428, -160246450)
Nov 8 00:37:47.020686 kernel: registered taskstats version 1
Nov 8 00:37:47.020693 kernel: Loading compiled-in X.509 certificates
Nov 8 00:37:47.020701 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:37:47.020707 kernel: Key type .fscrypt registered
Nov 8 00:37:47.020714 kernel: Key type fscrypt-provisioning registered
Nov 8 00:37:47.021907 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:37:47.021917 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:37:47.021924 kernel: ima: No architecture policies found
Nov 8 00:37:47.021931 kernel: clk: Disabling unused clocks
Nov 8 00:37:47.021938 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:37:47.021945 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:37:47.021952 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:37:47.021958 kernel: Run /init as init process
Nov 8 00:37:47.021966 kernel: with arguments:
Nov 8 00:37:47.021978 kernel: /init
Nov 8 00:37:47.021984 kernel: with environment:
Nov 8 00:37:47.021991 kernel: HOME=/
Nov 8 00:37:47.021998 kernel: TERM=linux
Nov 8 00:37:47.022007 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:37:47.022016 systemd[1]: Detected virtualization kvm.
Nov 8 00:37:47.022024 systemd[1]: Detected architecture x86-64.
Nov 8 00:37:47.022031 systemd[1]: Running in initrd.
Nov 8 00:37:47.022042 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:37:47.022049 systemd[1]: Hostname set to <localhost>.
Nov 8 00:37:47.022057 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:37:47.022064 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:37:47.022071 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:37:47.022281 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:37:47.022292 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:37:47.022300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:37:47.022308 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:37:47.022315 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:37:47.022325 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:37:47.022333 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:37:47.022343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:37:47.022351 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:37:47.022359 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:37:47.022366 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:37:47.022374 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:37:47.022381 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:37:47.022388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:37:47.022396 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:37:47.022403 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:37:47.022414 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:37:47.022421 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:37:47.022429 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:37:47.022437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:37:47.022444 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:37:47.022452 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:37:47.022459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:37:47.022467 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:37:47.022477 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:37:47.022485 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:37:47.022492 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:37:47.022522 systemd-journald[177]: Collecting audit messages is disabled.
Nov 8 00:37:47.022545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:37:47.022553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:37:47.022563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:37:47.022571 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:37:47.022582 systemd-journald[177]: Journal started
Nov 8 00:37:47.022599 systemd-journald[177]: Runtime Journal (/run/log/journal/e815b04b247649c395c02c8ea34d3bea) is 8.0M, max 78.3M, 70.3M free.
Nov 8 00:37:47.025499 systemd-modules-load[178]: Inserted module 'overlay'
Nov 8 00:37:47.129205 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:37:47.129238 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:37:47.129253 kernel: Bridge firewalling registered
Nov 8 00:37:47.057079 systemd-modules-load[178]: Inserted module 'br_netfilter'
Nov 8 00:37:47.135769 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:37:47.137349 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:37:47.140302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:37:47.141563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:37:47.151904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:37:47.155346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:37:47.163580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:37:47.193925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:37:47.197474 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:37:47.201992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:37:47.212914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:37:47.216481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:37:47.218669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:37:47.228968 dracut-cmdline[208]: dracut-dracut-053
Nov 8 00:37:47.230472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:37:47.233837 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:37:47.263769 systemd-resolved[215]: Positive Trust Anchors:
Nov 8 00:37:47.264911 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:37:47.264941 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:37:47.272571 systemd-resolved[215]: Defaulting to hostname 'linux'.
Nov 8 00:37:47.273780 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:37:47.275143 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:37:47.318766 kernel: SCSI subsystem initialized
Nov 8 00:37:47.329917 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:37:47.342784 kernel: iscsi: registered transport (tcp)
Nov 8 00:37:47.366199 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:37:47.366264 kernel: QLogic iSCSI HBA Driver
Nov 8 00:37:47.428223 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:37:47.434989 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:37:47.468169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:37:47.468223 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:37:47.469201 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:37:47.517783 kernel: raid6: avx2x4 gen() 30909 MB/s
Nov 8 00:37:47.535770 kernel: raid6: avx2x2 gen() 29600 MB/s
Nov 8 00:37:47.554109 kernel: raid6: avx2x1 gen() 25123 MB/s
Nov 8 00:37:47.554155 kernel: raid6: using algorithm avx2x4 gen() 30909 MB/s
Nov 8 00:37:47.575697 kernel: raid6: .... xor() 5036 MB/s, rmw enabled
Nov 8 00:37:47.575792 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:37:47.596775 kernel: xor: automatically using best checksumming function avx
Nov 8 00:37:47.736799 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:37:47.753164 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:37:47.764950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:37:47.778973 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Nov 8 00:37:47.784350 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:37:47.795896 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:37:47.812760 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Nov 8 00:37:47.852030 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:37:47.857931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:37:47.934231 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:37:47.945279 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:37:47.964023 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:37:47.968123 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:37:47.970622 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:37:47.973044 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:37:47.980190 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:37:48.007405 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:37:48.035816 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:37:48.174796 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:37:48.182758 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:37:48.182861 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:37:48.183008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:37:48.208353 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:37:48.209534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:37:48.209694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:37:48.210968 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:37:48.226978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:37:48.259216 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:37:48.259271 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:37:48.259283 kernel: libata version 3.00 loaded.
Nov 8 00:37:48.268146 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:37:48.268490 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:37:48.269764 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:37:48.270035 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:37:48.275595 kernel: scsi host1: ahci
Nov 8 00:37:48.276071 kernel: scsi host2: ahci
Nov 8 00:37:48.279022 kernel: scsi host3: ahci
Nov 8 00:37:48.280778 kernel: scsi host4: ahci
Nov 8 00:37:48.282750 kernel: scsi host5: ahci
Nov 8 00:37:48.283901 kernel: scsi host6: ahci
Nov 8 00:37:48.284113 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Nov 8 00:37:48.284127 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Nov 8 00:37:48.284136 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Nov 8 00:37:48.284146 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Nov 8 00:37:48.284155 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Nov 8 00:37:48.284171 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Nov 8 00:37:48.398602 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:37:48.408895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:37:48.422360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:37:48.596752 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:37:48.596878 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:37:48.600624 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:37:48.603756 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:37:48.606742 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:37:48.611505 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:37:48.625483 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 8 00:37:48.656009 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 8 00:37:48.656277 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:37:48.656458 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 8 00:37:48.656642 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:37:48.665967 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:37:48.665994 kernel: GPT:9289727 != 167739391
Nov 8 00:37:48.667773 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:37:48.670142 kernel: GPT:9289727 != 167739391
Nov 8 00:37:48.673353 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:37:48.673376 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:37:48.678755 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:37:48.719003 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (440)
Nov 8 00:37:48.724750 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (439)
Nov 8 00:37:48.725428 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:37:48.739385 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:37:48.748450 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:37:48.751290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:37:48.758572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:37:48.771985 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:37:48.777780 disk-uuid[567]: Primary Header is updated.
Nov 8 00:37:48.777780 disk-uuid[567]: Secondary Entries is updated.
Nov 8 00:37:48.777780 disk-uuid[567]: Secondary Header is updated.
Nov 8 00:37:48.785770 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:37:48.794765 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:37:49.804029 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:37:49.807580 disk-uuid[568]: The operation has completed successfully.
Nov 8 00:37:49.864541 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:37:49.864706 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:37:49.877882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:37:49.884769 sh[582]: Success
Nov 8 00:37:49.901822 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:37:49.962627 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:37:49.972918 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:37:49.977607 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:37:50.007325 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:37:50.007365 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:37:50.010507 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:37:50.013871 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:37:50.018046 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:37:50.027744 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:37:50.029700 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:37:50.031270 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:37:50.042946 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:37:50.048070 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:37:50.067137 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:37:50.067245 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:37:50.067263 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:37:50.074334 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:37:50.074378 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:37:50.091160 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:37:50.095187 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:37:50.101364 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:37:50.109936 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:37:50.206304 ignition[692]: Ignition 2.19.0
Nov 8 00:37:50.206325 ignition[692]: Stage: fetch-offline
Nov 8 00:37:50.206371 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:37:50.206384 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:37:50.206499 ignition[692]: parsed url from cmdline: ""
Nov 8 00:37:50.206506 ignition[692]: no config URL provided
Nov 8 00:37:50.206513 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:37:50.206525 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:37:50.212640 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:37:50.206531 ignition[692]: failed to fetch config: resource requires networking
Nov 8 00:37:50.206701 ignition[692]: Ignition finished successfully
Nov 8 00:37:50.219258 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:37:50.228929 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:37:50.249601 systemd-networkd[770]: lo: Link UP
Nov 8 00:37:50.249618 systemd-networkd[770]: lo: Gained carrier
Nov 8 00:37:50.251414 systemd-networkd[770]: Enumeration completed
Nov 8 00:37:50.252031 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:37:50.252036 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:37:50.253409 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:37:50.254626 systemd-networkd[770]: eth0: Link UP
Nov 8 00:37:50.254631 systemd-networkd[770]: eth0: Gained carrier
Nov 8 00:37:50.254638 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:37:50.256317 systemd[1]: Reached target network.target - Network.
Nov 8 00:37:50.266877 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:37:50.283078 ignition[772]: Ignition 2.19.0
Nov 8 00:37:50.283099 ignition[772]: Stage: fetch
Nov 8 00:37:50.283314 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:37:50.283328 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:37:50.283426 ignition[772]: parsed url from cmdline: ""
Nov 8 00:37:50.283431 ignition[772]: no config URL provided
Nov 8 00:37:50.283436 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:37:50.283448 ignition[772]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:37:50.283494 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 8 00:37:50.283832 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:37:50.484294 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 8 00:37:50.484812 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:37:50.884972 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 8 00:37:50.885117 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:37:50.908787 systemd-networkd[770]: eth0: DHCPv4 address 172.239.57.26/24, gateway 172.239.57.1 acquired from 23.213.15.222
Nov 8 00:37:51.685230 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 8 00:37:51.777300 ignition[772]: PUT result: OK
Nov 8 00:37:51.777352 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 8 00:37:51.892949 ignition[772]: GET result: OK
Nov 8 00:37:51.893115 ignition[772]: parsing config with SHA512: c941f28218ade2e22f63aa51590bbb92f25a4e112b47d36bfb367a874ce68e1913cf4adb3f5f32e847a324dcde61f6b1f2ac436ded5ccf173d1a13090e3da1bd
Nov 8 00:37:51.903452 unknown[772]: fetched base config from "system"
Nov 8 00:37:51.903520 unknown[772]: fetched base config from "system"
Nov 8 00:37:51.904038 ignition[772]: fetch: fetch complete
Nov 8 00:37:51.903530 unknown[772]: fetched user config from "akamai"
Nov 8 00:37:51.904049 ignition[772]: fetch: fetch passed
Nov 8 00:37:51.907936 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:37:51.904899 ignition[772]: Ignition finished successfully
Nov 8 00:37:51.915941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:37:51.941610 ignition[781]: Ignition 2.19.0
Nov 8 00:37:51.941631 ignition[781]: Stage: kargs
Nov 8 00:37:51.941842 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:37:51.941857 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:37:51.945141 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:37:51.942794 ignition[781]: kargs: kargs passed
Nov 8 00:37:51.942842 ignition[781]: Ignition finished successfully
Nov 8 00:37:51.952060 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:37:51.968375 ignition[788]: Ignition 2.19.0
Nov 8 00:37:51.968385 ignition[788]: Stage: disks
Nov 8 00:37:51.968557 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:37:51.968571 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:37:51.970994 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:37:51.969474 ignition[788]: disks: disks passed
Nov 8 00:37:51.994276 systemd-networkd[770]: eth0: Gained IPv6LL
Nov 8 00:37:51.969516 ignition[788]: Ignition finished successfully
Nov 8 00:37:51.998361 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:37:51.999464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:37:52.001467 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:37:52.003228 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:37:52.005390 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:37:52.016932 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:37:52.036816 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:37:52.041833 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:37:52.051834 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:37:52.148808 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:37:52.149576 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:37:52.152019 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:37:52.158827 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:37:52.162825 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:37:52.165544 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:37:52.165595 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:37:52.165618 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:37:52.190108 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Nov 8 00:37:52.190131 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:37:52.190150 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:37:52.190162 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:37:52.190173 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:37:52.190183 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:37:52.177185 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:37:52.193776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:37:52.201879 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:37:52.248768 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:37:52.255181 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:37:52.260589 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:37:52.268119 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:37:52.378970 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:37:52.384832 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:37:52.387925 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:37:52.398483 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:37:52.403313 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:37:52.430424 ignition[919]: INFO : Ignition 2.19.0
Nov 8 00:37:52.430424 ignition[919]: INFO : Stage: mount
Nov 8 00:37:52.432692 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:37:52.432692 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:37:52.432692 ignition[919]: INFO : mount: mount passed
Nov 8 00:37:52.432692 ignition[919]: INFO : Ignition finished successfully
Nov 8 00:37:52.434149 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:37:52.436603 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:37:52.462876 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:37:53.154867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:37:53.169753 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (931)
Nov 8 00:37:53.174241 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:37:53.174274 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:37:53.177097 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:37:53.186116 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:37:53.186143 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:37:53.188740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:37:53.207518 ignition[947]: INFO : Ignition 2.19.0
Nov 8 00:37:53.207518 ignition[947]: INFO : Stage: files
Nov 8 00:37:53.209622 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:37:53.209622 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:37:53.209622 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:37:53.209622 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:37:53.209622 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:37:53.215778 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:37:53.215778 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:37:53.215778 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 00:37:53.212899 unknown[947]: wrote ssh authorized keys file for user: core
Nov 8 00:37:53.494714 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:37:53.632810 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 00:37:54.208381 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:37:54.672876 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:37:54.672876 ignition[947]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:37:54.699409 ignition[947]: INFO : files: files passed
Nov 8 00:37:54.699409 ignition[947]: INFO : Ignition finished successfully
Nov 8 00:37:54.687301 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:37:54.707606 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:37:54.713859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:37:54.715171 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:37:54.734353 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:37:54.734353 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:37:54.715272 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:37:54.738208 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:37:54.729999 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:37:54.732091 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:37:54.742821 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:37:54.767900 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:37:54.768035 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:37:54.770293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:37:54.771905 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:37:54.773870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:37:54.783876 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:37:54.795898 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
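
Every operation in the files stage above (op(1) through op(13)) is driven by the user config fetched earlier: user and ssh-key setup, file and sysext-link writes, unit and drop-in installation, and preset changes. The actual config is not in the log; the Python sketch below only illustrates the shape of an Ignition v3-style config that would produce similar operations, with field names taken from the public Ignition spec and all versions, keys, and unit bodies as placeholders.

    import json

    # Illustrative only: an Ignition v3-style config whose files stage would
    # log operations like those above. Field names follow the public Ignition
    # spec; the real config used on this host is not recoverable from the log.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {
            "users": [
                # Placeholder key; the log only shows that keys were written.
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"]},
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
            ],
            "links": [
                # sysext activation: /etc/extensions/<name>.raw -> downloaded image.
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "containerd.service",
                 "dropins": [{"name": "10-use-cgroupfs.conf",
                              "contents": "[Service]\n# placeholder drop-in body\n"}]},
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\n# placeholder unit body\n"},
            ]
        },
    }

    print(json.dumps(config, indent=2))
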
Nov 8 00:37:54.800866 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:37:54.811412 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:37:54.812428 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:37:54.814451 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:37:54.816385 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:37:54.816483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:37:54.818830 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:37:54.820012 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:37:54.821937 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:37:54.823807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:37:54.825583 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:37:54.827587 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:37:54.829649 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:37:54.831763 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:37:54.833670 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:37:54.834575 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:37:54.837463 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:37:54.837625 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:37:54.840235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:37:54.841482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:37:54.843260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:37:54.843366 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:37:54.845104 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:37:54.845251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:37:54.847621 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:37:54.847814 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:37:54.848988 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:37:54.849122 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:37:54.858794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:37:54.863711 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:37:54.865556 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:37:54.866881 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:37:54.869110 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:37:54.870192 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:37:54.876859 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:37:54.876966 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:37:54.880927 ignition[1001]: INFO : Ignition 2.19.0
Nov 8 00:37:54.880927 ignition[1001]: INFO : Stage: umount
Nov 8 00:37:54.880927 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:37:54.880927 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:37:54.885754 ignition[1001]: INFO : umount: umount passed
Nov 8 00:37:54.885754 ignition[1001]: INFO : Ignition finished successfully
Nov 8 00:37:54.889141 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:37:54.889357 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:37:54.891428 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:37:54.891485 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:37:54.892969 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:37:54.893020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:37:54.916590 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:37:54.916643 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:37:54.918392 systemd[1]: Stopped target network.target - Network.
Nov 8 00:37:54.920061 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:37:54.920115 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:37:54.922078 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:37:54.923810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:37:54.926124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:37:54.927667 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:37:54.929651 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:37:54.931631 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:37:54.931677 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:37:54.933341 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:37:54.933386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:37:54.935280 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:37:54.935332 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:37:54.937685 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:37:54.937761 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:37:54.939583 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:37:54.941516 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:37:54.944292 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:37:54.944823 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:37:54.944964 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:37:54.945770 systemd-networkd[770]: eth0: DHCPv6 lease lost
Nov 8 00:37:54.948542 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:37:54.948697 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:37:54.951002 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:37:54.951113 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:37:54.956249 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:37:54.956306 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:37:54.958263 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:37:54.958317 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:37:54.965837 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:37:54.968795 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:37:54.968852 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:37:54.970925 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:37:54.970975 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:37:54.973063 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:37:54.973114 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:37:54.975364 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:37:54.975412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:37:54.977160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:37:54.989626 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:37:54.989866 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:37:54.994260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:37:54.994384 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:37:54.995554 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:37:54.995597 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:37:54.997762 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:37:54.997814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:37:55.000521 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:37:55.000570 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:37:55.002636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:37:55.002684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:37:55.012857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:37:55.016370 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:37:55.016431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:37:55.017417 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 00:37:55.017467 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:37:55.019420 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:37:55.019470 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:37:55.023036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:37:55.023090 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:37:55.025601 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:37:55.025715 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:37:55.027280 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:37:55.027377 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:37:55.029915 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:37:55.036883 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:37:55.044541 systemd[1]: Switching root.
Nov 8 00:37:55.060599 systemd-journald[177]: Journal stopped
Nov 8 00:37:48.268146 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 00:37:48.268490 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 00:37:48.269764 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 00:37:48.270035 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 00:37:48.275595 kernel: scsi host1: ahci Nov 8 00:37:48.276071 kernel: scsi host2: ahci Nov 8 00:37:48.279022 kernel: scsi host3: ahci Nov 8 00:37:48.280778 kernel: scsi host4: ahci Nov 8 00:37:48.282750 kernel: scsi host5: ahci Nov 8 00:37:48.283901 kernel: scsi host6: ahci Nov 8 00:37:48.284113 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Nov 8 00:37:48.284127 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Nov 8 00:37:48.284136 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Nov 8 00:37:48.284146 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Nov 8 00:37:48.284155 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Nov 8 00:37:48.284171 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Nov 8 00:37:48.398602 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:37:48.408895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:37:48.422360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:37:48.596752 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:37:48.596878 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:37:48.600624 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 8 00:37:48.603756 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:37:48.606742 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 8 00:37:48.611505 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 00:37:48.625483 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 8 00:37:48.656009 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 8 00:37:48.656277 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:37:48.656458 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 8 00:37:48.656642 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 00:37:48.665967 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:37:48.665994 kernel: GPT:9289727 != 167739391 Nov 8 00:37:48.667773 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:37:48.670142 kernel: GPT:9289727 != 167739391 Nov 8 00:37:48.673353 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:37:48.673376 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:37:48.678755 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:37:48.719003 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (440) Nov 8 00:37:48.724750 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (439) Nov 8 00:37:48.725428 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 8 00:37:48.739385 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 8 00:37:48.748450 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
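An aside on the GPT warnings just above: the image ships its backup GPT header at LBA 9289727, but GPT requires the backup header in the disk's last LBA, and this 80 GiB disk has 167739392 sectors. The mismatch appears whenever a provider enlarges the block device past the image's original size, and it is what disk-uuid.service rewrites a few lines below ("Secondary Header is updated."). A minimal Python sketch of the arithmetic, using only values from the log:

    # Where the backup GPT header should live, from the numbers logged above.
    disk_blocks = 167739392             # sd 0:0:0:0: [sda] 167739392 512-byte logical blocks
    expected_alt_lba = disk_blocks - 1  # the backup GPT header belongs in the last LBA
    image_alt_lba = 9289727             # where the original (smaller) image put it
    assert expected_alt_lba == 167739391
    print(image_alt_lba != expected_alt_lba)  # True -> "GPT:9289727 != 167739391"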
Nov 8 00:37:48.751290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 8 00:37:48.758572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:37:48.771985 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:37:48.777780 disk-uuid[567]: Primary Header is updated. Nov 8 00:37:48.777780 disk-uuid[567]: Secondary Entries is updated. Nov 8 00:37:48.777780 disk-uuid[567]: Secondary Header is updated. Nov 8 00:37:48.785770 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:37:48.794765 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:37:49.804029 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:37:49.807580 disk-uuid[568]: The operation has completed successfully. Nov 8 00:37:49.864541 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:37:49.864706 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:37:49.877882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:37:49.884769 sh[582]: Success Nov 8 00:37:49.901822 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 8 00:37:49.962627 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:37:49.972918 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:37:49.977607 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:37:50.007325 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:37:50.007365 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:37:50.010507 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:37:50.013871 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:37:50.018046 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:37:50.027744 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:37:50.029700 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:37:50.031270 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:37:50.042946 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:37:50.048070 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:37:50.067137 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:37:50.067245 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:37:50.067263 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:37:50.074334 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:37:50.074378 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:37:50.091160 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:37:50.095187 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:37:50.101364 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:37:50.109936 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 8 00:37:50.206304 ignition[692]: Ignition 2.19.0 Nov 8 00:37:50.206325 ignition[692]: Stage: fetch-offline Nov 8 00:37:50.206371 ignition[692]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:37:50.206384 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:37:50.206499 ignition[692]: parsed url from cmdline: "" Nov 8 00:37:50.206506 ignition[692]: no config URL provided Nov 8 00:37:50.206513 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:37:50.206525 ignition[692]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:37:50.212640 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:37:50.206531 ignition[692]: failed to fetch config: resource requires networking Nov 8 00:37:50.206701 ignition[692]: Ignition finished successfully Nov 8 00:37:50.219258 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:37:50.228929 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:37:50.249601 systemd-networkd[770]: lo: Link UP Nov 8 00:37:50.249618 systemd-networkd[770]: lo: Gained carrier Nov 8 00:37:50.251414 systemd-networkd[770]: Enumeration completed Nov 8 00:37:50.252031 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:37:50.252036 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:37:50.253409 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:37:50.254626 systemd-networkd[770]: eth0: Link UP Nov 8 00:37:50.254631 systemd-networkd[770]: eth0: Gained carrier Nov 8 00:37:50.254638 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:37:50.256317 systemd[1]: Reached target network.target - Network. Nov 8 00:37:50.266877 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:37:50.283078 ignition[772]: Ignition 2.19.0 Nov 8 00:37:50.283099 ignition[772]: Stage: fetch Nov 8 00:37:50.283314 ignition[772]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:37:50.283328 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:37:50.283426 ignition[772]: parsed url from cmdline: "" Nov 8 00:37:50.283431 ignition[772]: no config URL provided Nov 8 00:37:50.283436 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:37:50.283448 ignition[772]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:37:50.283494 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 8 00:37:50.283832 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:37:50.484294 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2 Nov 8 00:37:50.484812 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:37:50.884972 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3 Nov 8 00:37:50.885117 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:37:50.908787 systemd-networkd[770]: eth0: DHCPv4 address 172.239.57.26/24, gateway 172.239.57.1 acquired from 23.213.15.222 Nov 8 00:37:51.685230 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4 Nov 8 00:37:51.777300 ignition[772]: PUT result: OK Nov 8 00:37:51.777352 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 8 00:37:51.892949 ignition[772]: GET result: OK Nov 8 00:37:51.893115 ignition[772]: parsing config with SHA512: c941f28218ade2e22f63aa51590bbb92f25a4e112b47d36bfb367a874ce68e1913cf4adb3f5f32e847a324dcde61f6b1f2ac436ded5ccf173d1a13090e3da1bd Nov 8 00:37:51.903452 unknown[772]: fetched base config from "system" Nov 8 00:37:51.903520 unknown[772]: fetched base config from "system" Nov 8 00:37:51.904038 ignition[772]: fetch: fetch complete Nov 8 00:37:51.903530 unknown[772]: fetched user config from "akamai" Nov 8 00:37:51.904049 ignition[772]: fetch: fetch passed Nov 8 00:37:51.907936 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:37:51.904899 ignition[772]: Ignition finished successfully Nov 8 00:37:51.915941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:37:51.941610 ignition[781]: Ignition 2.19.0 Nov 8 00:37:51.941631 ignition[781]: Stage: kargs Nov 8 00:37:51.941842 ignition[781]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:37:51.941857 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:37:51.945141 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:37:51.942794 ignition[781]: kargs: kargs passed Nov 8 00:37:51.942842 ignition[781]: Ignition finished successfully Nov 8 00:37:51.952060 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:37:51.968375 ignition[788]: Ignition 2.19.0 Nov 8 00:37:51.968385 ignition[788]: Stage: disks Nov 8 00:37:51.968557 ignition[788]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:37:51.968571 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:37:51.970994 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
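The fetch stage above shows the two-step metadata exchange on this platform: Ignition retries PUT http://169.254.169.254/v1/token until DHCP configures eth0 (attempts 1-3 fail with "network is unreachable"), then GETs /v1/user-data with the resulting token. A hedged Python sketch of that exchange; the header names are assumptions drawn from Linode/Akamai Metadata Service conventions and do not appear in the log itself:

    # Sketch of the token-then-fetch flow logged above (header names assumed).
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def fetch_user_data(timeout: float = 5.0) -> bytes:
        # Step 1: PUT /v1/token to obtain a short-lived metadata token.
        token_req = urllib.request.Request(
            f"{BASE}/token", method="PUT",
            headers={"Metadata-Token-Expiry-Seconds": "3600"})  # assumed header
        with urllib.request.urlopen(token_req, timeout=timeout) as resp:
            token = resp.read().decode()
        # Step 2: GET /v1/user-data, presenting the token.
        data_req = urllib.request.Request(
            f"{BASE}/user-data",
            headers={"Metadata-Token": token})  # assumed header
        with urllib.request.urlopen(data_req, timeout=timeout) as resp:
            return resp.read()  # Ignition then parses this as its config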
Nov 8 00:37:51.969474 ignition[788]: disks: disks passed Nov 8 00:37:51.994276 systemd-networkd[770]: eth0: Gained IPv6LL Nov 8 00:37:51.969516 ignition[788]: Ignition finished successfully Nov 8 00:37:51.998361 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:37:51.999464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:37:52.001467 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:37:52.003228 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:37:52.005390 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:37:52.016932 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:37:52.036816 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:37:52.041833 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:37:52.051834 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:37:52.148808 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:37:52.149576 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:37:52.152019 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:37:52.158827 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:37:52.162825 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:37:52.165544 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:37:52.165595 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:37:52.165618 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:37:52.190108 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804) Nov 8 00:37:52.190131 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:37:52.190150 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:37:52.190162 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:37:52.190173 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:37:52.190183 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:37:52.177185 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:37:52.193776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:37:52.201879 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:37:52.248768 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:37:52.255181 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:37:52.260589 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:37:52.268119 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:37:52.378970 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:37:52.384832 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:37:52.387925 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:37:52.398483 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 8 00:37:52.403313 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:37:52.430424 ignition[919]: INFO : Ignition 2.19.0 Nov 8 00:37:52.430424 ignition[919]: INFO : Stage: mount Nov 8 00:37:52.432692 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:37:52.432692 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:37:52.432692 ignition[919]: INFO : mount: mount passed Nov 8 00:37:52.432692 ignition[919]: INFO : Ignition finished successfully Nov 8 00:37:52.434149 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:37:52.436603 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:37:52.462876 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:37:53.154867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:37:53.169753 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (931) Nov 8 00:37:53.174241 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:37:53.174274 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:37:53.177097 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:37:53.186116 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:37:53.186143 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:37:53.188740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:37:53.207518 ignition[947]: INFO : Ignition 2.19.0 Nov 8 00:37:53.207518 ignition[947]: INFO : Stage: files Nov 8 00:37:53.209622 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:37:53.209622 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:37:53.209622 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:37:53.209622 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:37:53.209622 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:37:53.215778 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:37:53.215778 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:37:53.215778 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:37:53.215778 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:37:53.212899 unknown[947]: wrote ssh authorized keys file for user: core Nov 8 00:37:53.494714 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:37:53.632810 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:37:53.634896 ignition[947]: 
INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:37:53.634896 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:37:54.208381 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:37:54.672876 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:37:54.672876 ignition[947]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): op(f): [finished] writing 
unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:37:54.699409 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:37:54.699409 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:37:54.699409 ignition[947]: INFO : files: files passed Nov 8 00:37:54.699409 ignition[947]: INFO : Ignition finished successfully Nov 8 00:37:54.687301 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:37:54.707606 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:37:54.713859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:37:54.715171 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:37:54.734353 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:37:54.734353 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:37:54.715272 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:37:54.738208 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:37:54.729999 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:37:54.732091 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:37:54.742821 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:37:54.767900 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:37:54.768035 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:37:54.770293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:37:54.771905 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:37:54.773870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:37:54.783876 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:37:54.795898 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Nov 8 00:37:54.800866 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:37:54.811412 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:37:54.812428 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:37:54.814451 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:37:54.816385 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:37:54.816483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:37:54.818830 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:37:54.820012 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:37:54.821937 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:37:54.823807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:37:54.825583 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:37:54.827587 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:37:54.829649 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:37:54.831763 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:37:54.833670 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:37:54.834575 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:37:54.837463 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:37:54.837625 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:37:54.840235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:37:54.841482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:37:54.843260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:37:54.843366 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:37:54.845104 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:37:54.845251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:37:54.847621 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:37:54.847814 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:37:54.848988 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:37:54.849122 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:37:54.858794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:37:54.863711 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:37:54.865556 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:37:54.866881 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:37:54.869110 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:37:54.870192 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:37:54.876859 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:37:54.876966 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 8 00:37:54.880927 ignition[1001]: INFO : Ignition 2.19.0 Nov 8 00:37:54.880927 ignition[1001]: INFO : Stage: umount Nov 8 00:37:54.880927 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:37:54.880927 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:37:54.885754 ignition[1001]: INFO : umount: umount passed Nov 8 00:37:54.885754 ignition[1001]: INFO : Ignition finished successfully Nov 8 00:37:54.889141 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:37:54.889357 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:37:54.891428 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:37:54.891485 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:37:54.892969 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:37:54.893020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:37:54.916590 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:37:54.916643 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:37:54.918392 systemd[1]: Stopped target network.target - Network. Nov 8 00:37:54.920061 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:37:54.920115 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:37:54.922078 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:37:54.923810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:37:54.926124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:37:54.927667 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:37:54.929651 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:37:54.931631 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:37:54.931677 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:37:54.933341 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:37:54.933386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:37:54.935280 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:37:54.935332 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:37:54.937685 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:37:54.937761 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:37:54.939583 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:37:54.941516 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:37:54.944292 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:37:54.944823 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:37:54.944964 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:37:54.945770 systemd-networkd[770]: eth0: DHCPv6 lease lost Nov 8 00:37:54.948542 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:37:54.948697 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:37:54.951002 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:37:54.951113 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:37:54.956249 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Nov 8 00:37:54.956306 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:37:54.958263 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:37:54.958317 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:37:54.965837 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:37:54.968795 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:37:54.968852 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:37:54.970925 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:37:54.970975 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:37:54.973063 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:37:54.973114 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:37:54.975364 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:37:54.975412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:37:54.977160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:37:54.989626 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:37:54.989866 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:37:54.994260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:37:54.994384 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:37:54.995554 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:37:54.995597 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:37:54.997762 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:37:54.997814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:37:55.000521 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:37:55.000570 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:37:55.002636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:37:55.002684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:37:55.012857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:37:55.016370 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:37:55.016431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:37:55.017417 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:37:55.017467 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:37:55.019420 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:37:55.019470 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:37:55.023036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:37:55.023090 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:37:55.025601 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:37:55.025715 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Nov 8 00:37:55.027280 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:37:55.027377 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:37:55.029915 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:37:55.036883 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:37:55.044541 systemd[1]: Switching root. Nov 8 00:37:55.060599 systemd-journald[177]: Journal stopped Nov 8 00:37:56.372718 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Nov 8 00:37:56.373795 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:37:56.373809 kernel: SELinux: policy capability open_perms=1 Nov 8 00:37:56.373819 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:37:56.373832 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:37:56.373841 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:37:56.373850 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:37:56.373859 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:37:56.373868 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:37:56.373878 kernel: audit: type=1403 audit(1762562275.253:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:37:56.373888 systemd[1]: Successfully loaded SELinux policy in 63.812ms. Nov 8 00:37:56.373902 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.127ms. Nov 8 00:37:56.373913 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:37:56.373923 systemd[1]: Detected virtualization kvm. Nov 8 00:37:56.373933 systemd[1]: Detected architecture x86-64. Nov 8 00:37:56.373943 systemd[1]: Detected first boot. Nov 8 00:37:56.373955 systemd[1]: Initializing machine ID from random generator. Nov 8 00:37:56.373965 zram_generator::config[1065]: No configuration found. Nov 8 00:37:56.373976 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:37:56.373985 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:37:56.373995 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:37:56.374006 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:37:56.374016 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:37:56.374028 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:37:56.374038 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:37:56.374048 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:37:56.374058 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:37:56.374068 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:37:56.374078 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:37:56.374088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:37:56.374103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 8 00:37:56.374112 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:37:56.374122 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:37:56.374132 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:37:56.374142 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:37:56.374152 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:37:56.374162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:37:56.374171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:37:56.374184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:37:56.374194 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:37:56.374207 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:37:56.374217 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:37:56.374227 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:37:56.374237 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:37:56.374247 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:37:56.374257 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:37:56.374270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:37:56.374280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:37:56.374290 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:37:56.374300 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:37:56.374310 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:37:56.374324 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:37:56.374334 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:37:56.374345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:56.374355 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:37:56.374365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:37:56.374375 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:37:56.374385 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:37:56.374396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:37:56.374408 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:37:56.374418 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:37:56.374428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:37:56.374438 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:37:56.374448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:37:56.374459 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 8 00:37:56.374469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:37:56.374479 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:37:56.374492 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 8 00:37:56.374502 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 8 00:37:56.374512 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:37:56.374523 kernel: fuse: init (API version 7.39) Nov 8 00:37:56.374532 kernel: loop: module loaded Nov 8 00:37:56.374543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:37:56.374553 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:37:56.374563 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:37:56.374576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:37:56.374586 kernel: ACPI: bus type drm_connector registered Nov 8 00:37:56.374596 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:56.374606 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:37:56.374616 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:37:56.374626 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:37:56.374636 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:37:56.374646 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:37:56.374659 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:37:56.374669 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:37:56.374679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:37:56.374710 systemd-journald[1166]: Collecting audit messages is disabled. Nov 8 00:37:56.375796 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:37:56.375816 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:37:56.375828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:37:56.375838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:37:56.375849 systemd-journald[1166]: Journal started Nov 8 00:37:56.375869 systemd-journald[1166]: Runtime Journal (/run/log/journal/fd9730b002714329b0aed8fd734d0641) is 8.0M, max 78.3M, 70.3M free. Nov 8 00:37:56.381868 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:37:56.383462 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:37:56.384008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:37:56.385343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:37:56.385615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:37:56.386965 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:37:56.387224 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Nov 8 00:37:56.388648 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:37:56.388927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:37:56.390551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:37:56.391957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:37:56.393308 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:37:56.410556 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:37:56.416819 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:37:56.424499 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:37:56.425570 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:37:56.430886 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:37:56.443878 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:37:56.446893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:37:56.452953 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:37:56.455981 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:37:56.460952 systemd-journald[1166]: Time spent on flushing to /var/log/journal/fd9730b002714329b0aed8fd734d0641 is 50.244ms for 961 entries. Nov 8 00:37:56.460952 systemd-journald[1166]: System Journal (/var/log/journal/fd9730b002714329b0aed8fd734d0641) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:37:56.521152 systemd-journald[1166]: Received client request to flush runtime journal. Nov 8 00:37:56.467853 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:37:56.472861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:37:56.485039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:37:56.491454 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:37:56.494770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:37:56.496065 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:37:56.506123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:37:56.515925 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:37:56.527333 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:37:56.531458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:37:56.547599 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:37:56.554890 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Nov 8 00:37:56.555192 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Nov 8 00:37:56.562150 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 8 00:37:56.571990 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:37:56.602709 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:37:56.618873 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:37:56.636283 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Nov 8 00:37:56.636554 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Nov 8 00:37:56.644047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:37:56.894771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:37:56.901901 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:37:56.928882 systemd-udevd[1232]: Using default interface naming scheme 'v255'. Nov 8 00:37:56.951067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:37:56.962949 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:37:56.982433 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:37:57.046805 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:37:57.065168 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 8 00:37:57.098782 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:37:57.127745 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:37:57.134002 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:37:57.134211 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:37:57.141129 systemd-networkd[1239]: lo: Link UP Nov 8 00:37:57.141412 systemd-networkd[1239]: lo: Gained carrier Nov 8 00:37:57.143185 systemd-networkd[1239]: Enumeration completed Nov 8 00:37:57.143377 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:37:57.143839 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:37:57.143844 systemd-networkd[1239]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:37:57.144980 systemd-networkd[1239]: eth0: Link UP Nov 8 00:37:57.144997 systemd-networkd[1239]: eth0: Gained carrier Nov 8 00:37:57.145009 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:37:57.149875 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:37:57.157817 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:37:57.175490 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:37:57.184480 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:37:57.192463 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:37:57.197784 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:37:57.199124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:37:57.215833 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1234) Nov 8 00:37:57.268681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Nov 8 00:37:57.271164 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:37:57.278930 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:37:57.366627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:37:57.377486 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:37:57.406884 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:37:57.408746 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:37:57.414870 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:37:57.420410 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:37:57.450553 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:37:57.452167 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:37:57.453157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:37:57.453205 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:37:57.454114 systemd[1]: Reached target machines.target - Containers. Nov 8 00:37:57.456142 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:37:57.461858 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:37:57.465851 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:37:57.467025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:37:57.472891 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:37:57.475877 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:37:57.479975 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:37:57.483824 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:37:57.494600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:37:57.510775 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:37:57.513187 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:37:57.514960 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
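systemd-machine-id-commit.service, finished above, takes the transient /etc/machine-id that first boot mounted over the initially empty file and writes it to the real root filesystem (hence the etc-machine\x2did.mount deactivation just before it). The manual equivalent would be roughly:

    cat /etc/machine-id                     # the ID committed on this boot
    sudo systemd-machine-id-setup --commit  # write a transient ID to disk by hand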
Nov 8 00:37:57.545197 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:37:57.565769 kernel: loop1: detected capacity change from 0 to 8 Nov 8 00:37:57.584289 kernel: loop2: detected capacity change from 0 to 224512 Nov 8 00:37:57.623864 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:37:57.666489 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:37:57.685794 kernel: loop5: detected capacity change from 0 to 8 Nov 8 00:37:57.690437 kernel: loop6: detected capacity change from 0 to 224512 Nov 8 00:37:57.707746 kernel: loop7: detected capacity change from 0 to 142488 Nov 8 00:37:57.730076 (sd-merge)[1304]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Nov 8 00:37:57.730780 (sd-merge)[1304]: Merged extensions into '/usr'. Nov 8 00:37:57.736691 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:37:57.736742 systemd[1]: Reloading... Nov 8 00:37:57.792789 systemd-networkd[1239]: eth0: DHCPv4 address 172.239.57.26/24, gateway 172.239.57.1 acquired from 23.213.15.222 Nov 8 00:37:57.836862 zram_generator::config[1332]: No configuration found. Nov 8 00:37:57.883238 ldconfig[1287]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:37:57.987887 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:37:58.043992 systemd[1]: Reloading finished in 306 ms. Nov 8 00:37:58.064586 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:37:58.066042 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:37:58.074971 systemd[1]: Starting ensure-sysext.service... Nov 8 00:37:58.076928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:37:58.087894 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:37:58.088027 systemd[1]: Reloading... Nov 8 00:37:58.102598 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:37:58.103050 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:37:58.104098 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:37:58.104474 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 8 00:37:58.104625 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 8 00:37:58.114618 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:37:58.117709 systemd-tmpfiles[1384]: Skipping /boot Nov 8 00:37:58.145839 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:37:58.146000 systemd-tmpfiles[1384]: Skipping /boot Nov 8 00:37:58.187007 zram_generator::config[1417]: No configuration found. Nov 8 00:37:58.201957 systemd-networkd[1239]: eth0: Gained IPv6LL Nov 8 00:37:58.326472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:37:58.396943 systemd[1]: Reloading finished in 308 ms. 
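The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-akamai extension images onto /usr, which is why the service manager then reloads. The merge state can be inspected and redone from a shell; a brief sketch:

    systemd-sysext status         # list merged extensions and the hierarchy they cover
    sudo systemd-sysext refresh   # unmerge and re-merge after adding or removing images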
Nov 8 00:37:58.421056 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:37:58.431702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:37:58.448938 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:37:58.465067 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:37:58.470169 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:37:58.485016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:37:58.497997 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:37:58.504074 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:58.504276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:37:58.507195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:37:58.517939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:37:58.532393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:37:58.534866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:37:58.534968 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:58.546040 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:37:58.549012 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:58.549996 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:37:58.550280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:37:58.550431 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:58.553027 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:37:58.553273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:37:58.558420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:58.558971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:37:58.567241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:37:58.571779 augenrules[1495]: No rules Nov 8 00:37:58.581871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:37:58.587912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:37:58.588067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:37:58.593119 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
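The duplicate-line warnings from systemd-tmpfiles a little earlier refer to tmpfiles.d entries, where several packages may declare the same path and later duplicates are ignored. The format is one directive per line; an illustrative entry in the shape of the /var/log/journal line being warned about (mode and ownership here are examples, not read from this host):

    # Type Path             Mode UID  GID             Age Argument
    d      /var/log/journal 2755 root systemd-journal -   -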
Nov 8 00:37:58.598061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:37:58.598322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:37:58.605084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:37:58.605387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:37:58.609769 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:37:58.610095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:37:58.613708 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:37:58.614202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:37:58.627097 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:37:58.629600 systemd[1]: Finished ensure-sysext.service. Nov 8 00:37:58.636779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:37:58.636880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:37:58.642936 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:37:58.647093 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:37:58.669509 systemd-resolved[1476]: Positive Trust Anchors: Nov 8 00:37:58.669531 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:37:58.669560 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:37:58.677437 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:37:58.679057 systemd-resolved[1476]: Defaulting to hostname 'linux'. Nov 8 00:37:58.689668 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:37:58.691057 systemd[1]: Reached target network.target - Network. Nov 8 00:37:58.692885 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:37:58.694716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:37:58.697101 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:37:58.699695 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:37:58.742811 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:37:58.744125 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:37:58.746173 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:37:58.747528 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
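The positive trust anchor logged by systemd-resolved is the DNSSEC root key: a DS record for the root zone with key tag 20326 (the KSK-2017 key). Local overrides use the same record syntax; a sketch mirroring the built-in anchor (a file like this is only needed to replace the compiled-in default):

    # /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d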
Nov 8 00:37:58.748641 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:37:58.749712 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:37:58.749908 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:37:58.750921 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:37:58.752099 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:37:58.753312 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:37:58.754471 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:37:58.757321 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:37:58.762039 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:37:58.764549 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:37:58.768410 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:37:58.769604 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:37:58.770451 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:37:58.771518 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:37:58.771574 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:37:58.771606 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:37:58.774063 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:37:58.776872 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:37:58.780887 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:37:58.786841 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:37:58.796362 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:37:58.799786 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:37:58.808889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:37:58.812608 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:37:58.820003 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:37:58.829760 jq[1530]: false Nov 8 00:37:58.834144 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:37:58.847076 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:37:58.862330 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:37:58.869939 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:37:58.870610 dbus-daemon[1528]: [system] SELinux support is enabled Nov 8 00:37:58.873071 dbus-daemon[1528]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1239 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:37:58.874316 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Nov 8 00:37:58.880865 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:37:58.885112 extend-filesystems[1531]: Found loop4 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found loop5 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found loop6 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found loop7 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda1 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda2 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda3 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found usr Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda4 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda6 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda7 Nov 8 00:37:58.885112 extend-filesystems[1531]: Found sda9 Nov 8 00:37:58.885112 extend-filesystems[1531]: Checking size of /dev/sda9 Nov 8 00:37:58.931835 extend-filesystems[1531]: Resized partition /dev/sda9 Nov 8 00:37:58.933183 coreos-metadata[1527]: Nov 08 00:37:58.897 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 8 00:37:58.894822 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:37:58.939105 update_engine[1554]: I20251108 00:37:58.930516 1554 main.cc:92] Flatcar Update Engine starting Nov 8 00:37:58.903176 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:37:58.914229 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:37:58.939575 jq[1562]: true Nov 8 00:37:58.914555 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:37:58.942990 extend-filesystems[1571]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:37:58.950936 update_engine[1554]: I20251108 00:37:58.942916 1554 update_check_scheduler.cc:74] Next update check in 8m44s Nov 8 00:37:58.918132 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:37:58.918443 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:37:58.936173 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:37:58.940582 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:37:58.940905 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:37:58.994065 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Nov 8 00:37:58.982974 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:37:59.002538 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:37:59.007169 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:37:59.007214 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:37:59.026683 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:37:59.027977 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:37:59.592554 systemd-resolved[1476]: Clock change detected. Flushing caches. Nov 8 00:37:59.593236 systemd-timesyncd[1517]: Contacted time server 141.11.234.198:123 (0.flatcar.pool.ntp.org). 
Nov 8 00:37:59.593304 systemd-timesyncd[1517]: Initial clock synchronization to Sat 2025-11-08 00:37:59.592518 UTC. Nov 8 00:37:59.593670 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:37:59.593695 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:37:59.600888 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:37:59.603193 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1241) Nov 8 00:37:59.608872 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:37:59.619920 jq[1576]: true Nov 8 00:37:59.628930 tar[1573]: linux-amd64/LICENSE Nov 8 00:37:59.628930 tar[1573]: linux-amd64/helm Nov 8 00:37:59.632360 systemd-logind[1553]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:37:59.674395 coreos-metadata[1527]: Nov 08 00:37:59.656 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 8 00:37:59.634732 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:37:59.637890 systemd-logind[1553]: New seat seat0. Nov 8 00:37:59.678448 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:37:59.844427 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:37:59.856729 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:37:59.867989 coreos-metadata[1527]: Nov 08 00:37:59.867 INFO Fetch successful Nov 8 00:37:59.868272 coreos-metadata[1527]: Nov 08 00:37:59.868 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 8 00:37:59.875353 systemd[1]: Starting sshkeys.service... Nov 8 00:37:59.905606 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:37:59.913120 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:37:59.914686 dbus-daemon[1528]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1590 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:37:59.914980 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:37:59.917475 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:37:59.928037 systemd[1]: Starting polkit.service - Authorization Manager... Nov 8 00:37:59.941151 polkitd[1621]: Started polkitd version 121 Nov 8 00:37:59.945228 polkitd[1621]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:37:59.945284 polkitd[1621]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:37:59.946000 polkitd[1621]: Finished loading, compiling and executing 2 rules Nov 8 00:37:59.946424 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:37:59.946537 systemd[1]: Started polkit.service - Authorization Manager. 
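polkitd here is version 121, which reads JavaScript rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d (the "2 rules" it compiled). A hypothetical rule in that format, not one of the two loaded on this boot:

    // /etc/polkit-1/rules.d/49-example.rules (hypothetical)
    polkit.addRule(function(action, subject) {
        // Allow members of "wheel" to manage systemd units without extra auth
        if (action.id == "org.freedesktop.systemd1.manage-units" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });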
Nov 8 00:37:59.948476 polkitd[1621]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:37:59.951963 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:37:59.989840 coreos-metadata[1620]: Nov 08 00:37:59.989 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 8 00:37:59.998199 systemd-resolved[1476]: System hostname changed to '172-239-57-26'. Nov 8 00:37:59.998361 systemd-hostnamed[1590]: Hostname set to <172-239-57-26> (transient) Nov 8 00:38:00.017854 containerd[1577]: time="2025-11-08T00:38:00.016044875Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:38:00.040158 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Nov 8 00:38:00.049962 containerd[1577]: time="2025-11-08T00:38:00.049938216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:00.052195 containerd[1577]: time="2025-11-08T00:38:00.052167679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:00.052252 containerd[1577]: time="2025-11-08T00:38:00.052239429Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:38:00.052455 containerd[1577]: time="2025-11-08T00:38:00.052286289Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:38:00.053480 containerd[1577]: time="2025-11-08T00:38:00.053462591Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:38:00.053545 containerd[1577]: time="2025-11-08T00:38:00.053533311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:00.053707 containerd[1577]: time="2025-11-08T00:38:00.053685941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:00.053982 containerd[1577]: time="2025-11-08T00:38:00.053967232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054274742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054295132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054308272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054318032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054416682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054648973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054796643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054809603Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054902573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:38:00.055175 containerd[1577]: time="2025-11-08T00:38:00.054986973Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:38:00.056688 extend-filesystems[1571]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:38:00.056688 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 8 00:38:00.056688 extend-filesystems[1571]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Nov 8 00:38:00.075421 extend-filesystems[1531]: Resized filesystem in /dev/sda9 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.060602262Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.060658812Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.060679452Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.060742882Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.060762092Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.060881172Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061271413Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061386103Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061401483Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061418713Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061431933Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061443933Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061454893Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.085228 containerd[1577]: time="2025-11-08T00:38:00.061468253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.059629 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061482343Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061494093Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061505543Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061515543Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061533863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061546593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061557473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061569163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061589003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061601783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061612263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061623573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061635013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.085761 containerd[1577]: time="2025-11-08T00:38:00.061648433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.060244 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
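The extend-filesystems output above is an online ext4 grow: resize2fs takes /dev/sda9 from 553472 to 20360187 4 KiB blocks (roughly 78 GiB) while it is mounted at /. The manual equivalent after enlarging the partition would be approximately:

    sudo resize2fs /dev/sda9   # online-grow the mounted ext4 root
    df -h /                    # confirm the new size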
Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061658973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061669673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061680523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061694683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061712783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061723033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061733153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061782194Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061796124Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061805984Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061815924Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061824834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061835224Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:38:00.086038 containerd[1577]: time="2025-11-08T00:38:00.061844564Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:38:00.074037 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:38:00.088374 containerd[1577]: time="2025-11-08T00:38:00.061853524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.062061724Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.062114034Z" level=info msg="Connect containerd service" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.063421046Z" level=info msg="using legacy CRI server" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.063439906Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.063511026Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.064009467Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:38:00.088403 
containerd[1577]: time="2025-11-08T00:38:00.065608519Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065666359Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065755259Z" level=info msg="Start subscribing containerd event" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065793250Z" level=info msg="Start recovering state" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065847380Z" level=info msg="Start event monitor" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065868400Z" level=info msg="Start snapshots syncer" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065876810Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065884090Z" level=info msg="Start streaming server" Nov 8 00:38:00.088403 containerd[1577]: time="2025-11-08T00:38:00.065937100Z" level=info msg="containerd successfully booted in 0.051968s" Nov 8 00:38:00.097152 coreos-metadata[1620]: Nov 08 00:38:00.095 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 8 00:38:00.133165 coreos-metadata[1527]: Nov 08 00:38:00.132 INFO Fetch successful Nov 8 00:38:00.234296 coreos-metadata[1620]: Nov 08 00:38:00.234 INFO Fetch successful Nov 8 00:38:00.278171 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:38:00.282808 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:38:00.298655 systemd[1]: Finished sshkeys.service. Nov 8 00:38:00.313623 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:38:00.316719 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:38:00.406174 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:38:00.447298 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:38:00.458427 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:38:00.471813 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:38:00.474236 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:38:00.488057 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:38:00.503397 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:38:00.514605 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:38:00.519567 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:38:00.520702 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:38:00.638775 tar[1573]: linux-amd64/README.md Nov 8 00:38:00.658876 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:38:01.030312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:38:01.030599 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:38:01.032342 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:38:01.033720 systemd[1]: Startup finished in 9.617s (kernel) + 5.278s (userspace) = 14.895s. 
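The long CRI configuration dump above maps directly onto containerd's /etc/containerd/config.toml. A fragment expressing the settings visible in the dump (overlayfs snapshotter, runc via io.containerd.runc.v2, the pause:3.8 sandbox image, SystemdCgroup disabled); this is a sketch of the equivalent TOML, not the file actually shipped on this image:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false   # matches Options:map[SystemdCgroup:false] above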
Nov 8 00:38:01.558936 kubelet[1710]: E1108 00:38:01.558855 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:38:01.562316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:38:01.562593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:38:02.362437 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:38:02.368553 systemd[1]: Started sshd@0-172.239.57.26:22-147.75.109.163:44566.service - OpenSSH per-connection server daemon (147.75.109.163:44566). Nov 8 00:38:02.711160 sshd[1722]: Accepted publickey for core from 147.75.109.163 port 44566 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:02.713345 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:02.724342 systemd-logind[1553]: New session 1 of user core. Nov 8 00:38:02.725426 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:38:02.731355 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:38:02.746425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:38:02.754596 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:38:02.760116 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:38:02.858700 systemd[1728]: Queued start job for default target default.target. Nov 8 00:38:02.859096 systemd[1728]: Created slice app.slice - User Application Slice. Nov 8 00:38:02.859119 systemd[1728]: Reached target paths.target - Paths. Nov 8 00:38:02.859158 systemd[1728]: Reached target timers.target - Timers. Nov 8 00:38:02.868218 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:38:02.875309 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:38:02.875371 systemd[1728]: Reached target sockets.target - Sockets. Nov 8 00:38:02.875385 systemd[1728]: Reached target basic.target - Basic System. Nov 8 00:38:02.875426 systemd[1728]: Reached target default.target - Main User Target. Nov 8 00:38:02.875462 systemd[1728]: Startup finished in 109ms. Nov 8 00:38:02.876439 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:38:02.886419 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:38:03.140340 systemd[1]: Started sshd@1-172.239.57.26:22-147.75.109.163:44580.service - OpenSSH per-connection server daemon (147.75.109.163:44580). Nov 8 00:38:03.458940 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 44580 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:03.460984 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:03.465918 systemd-logind[1553]: New session 2 of user core. Nov 8 00:38:03.475397 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:38:03.704632 sshd[1740]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:03.708528 systemd[1]: sshd@1-172.239.57.26:22-147.75.109.163:44580.service: Deactivated successfully. Nov 8 00:38:03.711725 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. 
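This kubelet failure is expected on a node that has not yet been joined to a cluster: the unit exits and restarts until something, normally kubeadm, writes /var/lib/kubelet/config.yaml. A minimal sketch of what that file looks like once generated (values illustrative, not taken from this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # illustrative; must match the container runtime's driver
    staticPodPath: /etc/kubernetes/manifests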
Nov 8 00:38:03.712410 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:38:03.713346 systemd-logind[1553]: Removed session 2. Nov 8 00:38:03.767337 systemd[1]: Started sshd@2-172.239.57.26:22-147.75.109.163:44586.service - OpenSSH per-connection server daemon (147.75.109.163:44586). Nov 8 00:38:04.111910 sshd[1748]: Accepted publickey for core from 147.75.109.163 port 44586 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:04.113856 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:04.118196 systemd-logind[1553]: New session 3 of user core. Nov 8 00:38:04.128396 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:38:04.371100 sshd[1748]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:04.373976 systemd[1]: sshd@2-172.239.57.26:22-147.75.109.163:44586.service: Deactivated successfully. Nov 8 00:38:04.377093 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:38:04.379226 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:38:04.380316 systemd-logind[1553]: Removed session 3. Nov 8 00:38:04.423324 systemd[1]: Started sshd@3-172.239.57.26:22-147.75.109.163:44600.service - OpenSSH per-connection server daemon (147.75.109.163:44600). Nov 8 00:38:04.742576 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 44600 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:04.744171 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:04.748856 systemd-logind[1553]: New session 4 of user core. Nov 8 00:38:04.757417 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:38:04.989069 sshd[1756]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:04.992199 systemd[1]: sshd@3-172.239.57.26:22-147.75.109.163:44600.service: Deactivated successfully. Nov 8 00:38:04.996346 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:38:04.997006 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:38:04.998326 systemd-logind[1553]: Removed session 4. Nov 8 00:38:05.048329 systemd[1]: Started sshd@4-172.239.57.26:22-147.75.109.163:44614.service - OpenSSH per-connection server daemon (147.75.109.163:44614). Nov 8 00:38:05.383782 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 44614 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:05.385180 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:05.390301 systemd-logind[1553]: New session 5 of user core. Nov 8 00:38:05.399457 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:38:05.594661 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:38:05.595039 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:38:05.612285 sudo[1768]: pam_unix(sudo:session): session closed for user root Nov 8 00:38:05.665875 sshd[1764]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:05.669241 systemd[1]: sshd@4-172.239.57.26:22-147.75.109.163:44614.service: Deactivated successfully. Nov 8 00:38:05.672772 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:38:05.673329 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:38:05.675361 systemd-logind[1553]: Removed session 5. 
Nov 8 00:38:05.725335 systemd[1]: Started sshd@5-172.239.57.26:22-147.75.109.163:44628.service - OpenSSH per-connection server daemon (147.75.109.163:44628). Nov 8 00:38:06.081719 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 44628 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:06.083468 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:06.088261 systemd-logind[1553]: New session 6 of user core. Nov 8 00:38:06.094400 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:38:06.292323 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:38:06.292684 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:38:06.296566 sudo[1778]: pam_unix(sudo:session): session closed for user root Nov 8 00:38:06.302476 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:38:06.302832 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:38:06.316366 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:38:06.318799 auditctl[1781]: No rules Nov 8 00:38:06.319295 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:38:06.319564 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:38:06.323416 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:38:06.351453 augenrules[1800]: No rules Nov 8 00:38:06.353261 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:38:06.356334 sudo[1777]: pam_unix(sudo:session): session closed for user root Nov 8 00:38:06.411850 sshd[1773]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:06.414838 systemd[1]: sshd@5-172.239.57.26:22-147.75.109.163:44628.service: Deactivated successfully. Nov 8 00:38:06.419173 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:38:06.419907 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:38:06.420899 systemd-logind[1553]: Removed session 6. Nov 8 00:38:06.466480 systemd[1]: Started sshd@6-172.239.57.26:22-147.75.109.163:44632.service - OpenSSH per-connection server daemon (147.75.109.163:44632). Nov 8 00:38:06.799820 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 44632 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:06.801449 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:06.806219 systemd-logind[1553]: New session 7 of user core. Nov 8 00:38:06.813408 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:38:07.002258 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:38:07.002626 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:38:07.262382 systemd[1]: Starting docker.service - Docker Application Container Engine... 
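The audit-rules sequence in session 6 above (remove the rules.d files, restart audit-rules, augenrules reports "No rules") reflects how augenrules works: it concatenates /etc/audit/rules.d/*.rules and loads the result, so deleting every fragment leaves an empty ruleset. A hypothetical fragment in that format:

    # /etc/audit/rules.d/10-identity.rules (hypothetical)
    -w /etc/passwd -p wa -k identity
    -w /etc/shadow -p wa -k identity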
Nov 8 00:38:07.265048 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:38:07.524586 dockerd[1828]: time="2025-11-08T00:38:07.523315075Z" level=info msg="Starting up" Nov 8 00:38:07.657007 dockerd[1828]: time="2025-11-08T00:38:07.656386795Z" level=info msg="Loading containers: start." Nov 8 00:38:07.780160 kernel: Initializing XFRM netlink socket Nov 8 00:38:07.856464 systemd-networkd[1239]: docker0: Link UP Nov 8 00:38:07.867715 dockerd[1828]: time="2025-11-08T00:38:07.867672062Z" level=info msg="Loading containers: done." Nov 8 00:38:07.880605 dockerd[1828]: time="2025-11-08T00:38:07.880568781Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:38:07.880742 dockerd[1828]: time="2025-11-08T00:38:07.880641411Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:38:07.880777 dockerd[1828]: time="2025-11-08T00:38:07.880751671Z" level=info msg="Daemon has completed initialization" Nov 8 00:38:07.883255 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck213901256-merged.mount: Deactivated successfully. Nov 8 00:38:07.908581 dockerd[1828]: time="2025-11-08T00:38:07.908263002Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:38:07.908341 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:38:08.739288 containerd[1577]: time="2025-11-08T00:38:08.739186399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:38:09.447985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879050205.mount: Deactivated successfully. 
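dockerd above settles on the overlay2 storage driver and warns only that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. Driver and logging choices of this kind live in /etc/docker/daemon.json; an illustrative file (overlay2 is already the default selected above, so this merely pins it):

    {
      "storage-driver": "overlay2",
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m" }
    }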
Nov 8 00:38:10.429173 containerd[1577]: time="2025-11-08T00:38:10.428774913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:10.430053 containerd[1577]: time="2025-11-08T00:38:10.429992385Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:38:10.430459 containerd[1577]: time="2025-11-08T00:38:10.430433045Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:10.433431 containerd[1577]: time="2025-11-08T00:38:10.433389460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:10.435306 containerd[1577]: time="2025-11-08T00:38:10.434571292Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.695344893s" Nov 8 00:38:10.435306 containerd[1577]: time="2025-11-08T00:38:10.434607602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:38:10.435306 containerd[1577]: time="2025-11-08T00:38:10.435295203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:38:11.669996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:38:11.675544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:38:11.864319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:38:11.874555 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:38:11.930722 kubelet[2042]: E1108 00:38:11.929867 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:38:11.936579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:38:11.937191 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:38:12.050441 containerd[1577]: time="2025-11-08T00:38:12.050332685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:12.051753 containerd[1577]: time="2025-11-08T00:38:12.051526227Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Nov 8 00:38:12.052774 containerd[1577]: time="2025-11-08T00:38:12.052370158Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:12.056096 containerd[1577]: time="2025-11-08T00:38:12.056064704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:12.057195 containerd[1577]: time="2025-11-08T00:38:12.057119655Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.621800072s"
Nov 8 00:38:12.057195 containerd[1577]: time="2025-11-08T00:38:12.057184845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 8 00:38:12.057818 containerd[1577]: time="2025-11-08T00:38:12.057779086Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 8 00:38:13.414014 containerd[1577]: time="2025-11-08T00:38:13.413934950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:13.415527 containerd[1577]: time="2025-11-08T00:38:13.415321592Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Nov 8 00:38:13.416771 containerd[1577]: time="2025-11-08T00:38:13.416103334Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:13.422895 containerd[1577]: time="2025-11-08T00:38:13.422856064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:13.424240 containerd[1577]: time="2025-11-08T00:38:13.424206016Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.36639348s"
Nov 8 00:38:13.424325 containerd[1577]: time="2025-11-08T00:38:13.424308556Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 8 00:38:13.425798 containerd[1577]: time="2025-11-08T00:38:13.425773638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 8 00:38:14.723677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844662130.mount: Deactivated successfully.
Nov 8 00:38:15.123293 containerd[1577]: time="2025-11-08T00:38:15.123068384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:15.124262 containerd[1577]: time="2025-11-08T00:38:15.123845825Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Nov 8 00:38:15.124626 containerd[1577]: time="2025-11-08T00:38:15.124598276Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:15.127159 containerd[1577]: time="2025-11-08T00:38:15.126305869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:15.128732 containerd[1577]: time="2025-11-08T00:38:15.128696642Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.702796734s"
Nov 8 00:38:15.128771 containerd[1577]: time="2025-11-08T00:38:15.128736692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 8 00:38:15.129430 containerd[1577]: time="2025-11-08T00:38:15.129259153Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 8 00:38:15.815648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583366823.mount: Deactivated successfully.
Nov 8 00:38:16.518420 containerd[1577]: time="2025-11-08T00:38:16.518372637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:16.520092 containerd[1577]: time="2025-11-08T00:38:16.519908579Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 8 00:38:16.521153 containerd[1577]: time="2025-11-08T00:38:16.520640640Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:16.523298 containerd[1577]: time="2025-11-08T00:38:16.523261684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:16.524463 containerd[1577]: time="2025-11-08T00:38:16.524359556Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.394837992s"
Nov 8 00:38:16.524463 containerd[1577]: time="2025-11-08T00:38:16.524386356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 8 00:38:16.525452 containerd[1577]: time="2025-11-08T00:38:16.525429657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 8 00:38:17.152383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541382054.mount: Deactivated successfully.
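Each pull record pairs bytes read with the wall time reported by "Pulled image", so registry throughput can be estimated directly from the log, e.g. 28837916 bytes in 1.695344893s for kube-apiserver, roughly 16 MiB/s. A small sketch that reuses the figures logged above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures taken from the pull records in this log
    	// (bytes read vs. the duration in the "Pulled image" line).
    	pulls := []struct {
    		name  string
    		bytes float64
    		took  string
    	}{
    		{"kube-apiserver:v1.32.9", 28837916, "1.695344893s"},
    		{"kube-proxy:v1.32.9", 30924206, "1.702796734s"},
    		{"coredns:v1.11.3", 18565241, "1.394837992s"},
    	}
    	for _, p := range pulls {
    		d, err := time.ParseDuration(p.took)
    		if err != nil {
    			panic(err)
    		}
    		mibps := p.bytes / d.Seconds() / (1 << 20)
    		fmt.Printf("%-25s %6.1f MiB/s\n", p.name, mibps)
    	}
    }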
Nov 8 00:38:17.155438 containerd[1577]: time="2025-11-08T00:38:17.155388162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:17.156029 containerd[1577]: time="2025-11-08T00:38:17.155996173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 8 00:38:17.156530 containerd[1577]: time="2025-11-08T00:38:17.156491744Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:17.158445 containerd[1577]: time="2025-11-08T00:38:17.158419007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:17.159846 containerd[1577]: time="2025-11-08T00:38:17.159233088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.776411ms"
Nov 8 00:38:17.159846 containerd[1577]: time="2025-11-08T00:38:17.159258478Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 8 00:38:17.159846 containerd[1577]: time="2025-11-08T00:38:17.159722299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 8 00:38:17.840689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815895664.mount: Deactivated successfully.
Nov 8 00:38:20.013493 containerd[1577]: time="2025-11-08T00:38:20.013442779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:20.015043 containerd[1577]: time="2025-11-08T00:38:20.014988711Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 8 00:38:20.015417 containerd[1577]: time="2025-11-08T00:38:20.015366042Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:20.018352 containerd[1577]: time="2025-11-08T00:38:20.018316066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:38:20.019712 containerd[1577]: time="2025-11-08T00:38:20.019600348Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.859857369s"
Nov 8 00:38:20.019712 containerd[1577]: time="2025-11-08T00:38:20.019631088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 8 00:38:21.832660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
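Every image above is recorded under both a mutable repo tag and an immutable repo digest (e.g. registry.k8s.io/etcd@sha256:c6a9d11c...). Such digests are just the SHA-256 of the referenced content, so a stored blob can be re-verified by re-hashing its bytes. A minimal sketch, with a hypothetical local file standing in for the blob:

    package main

    import (
    	"crypto/sha256"
    	"fmt"
    	"os"
    )

    // Sketch of content addressing as used by the repo digests above:
    // anything named sha256:<hex> can be checked offline by hashing the
    // bytes again and comparing, independent of its mutable tag.
    func main() {
    	data, err := os.ReadFile("layer.tar") // hypothetical local blob
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(data))
    }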
Nov 8 00:38:21.841304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:38:21.871118 systemd[1]: Reloading requested from client PID 2200 ('systemctl') (unit session-7.scope)...
Nov 8 00:38:21.871176 systemd[1]: Reloading...
Nov 8 00:38:22.004258 zram_generator::config[2243]: No configuration found.
Nov 8 00:38:22.116907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:38:22.183955 systemd[1]: Reloading finished in 312 ms.
Nov 8 00:38:22.233323 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 8 00:38:22.233526 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 8 00:38:22.233981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:38:22.244350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:38:22.389164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:38:22.393219 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:38:22.435911 kubelet[2306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:38:22.438154 kubelet[2306]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:38:22.438154 kubelet[2306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
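kubelet[2306] immediately warns that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir belong in the file passed via --config. A hypothetical sketch of that migration: the field names follow the KubeletConfiguration v1beta1 schema as best I recall it (verify against your Kubernetes version), the endpoint value is an assumption since the log never prints it, and the plugin directory is the path kubelet probes later in this log. Because YAML is a superset of JSON, the marshaled output would load as /var/lib/kubelet/config.yaml.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Hypothetical fragment of a KubeletConfiguration covering the two
    // deprecated path flags from the warnings above.
    type kubeletConfig struct {
    	Kind                     string `json:"kind"`
    	APIVersion               string `json:"apiVersion"`
    	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
    	VolumePluginDir          string `json:"volumePluginDir"`
    }

    func main() {
    	cfg := kubeletConfig{
    		Kind:       "KubeletConfiguration",
    		APIVersion: "kubelet.config.k8s.io/v1beta1",
    		// Assumed containerd default; the log only shows containerd v1.7.21.
    		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
    		// Path kubelet recreates at probe.go:272 further down.
    		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }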
Nov 8 00:38:22.438154 kubelet[2306]: I1108 00:38:22.436368 2306 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:38:22.629002 kubelet[2306]: I1108 00:38:22.628961 2306 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:38:22.629002 kubelet[2306]: I1108 00:38:22.628992 2306 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:38:22.629362 kubelet[2306]: I1108 00:38:22.629338 2306 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:38:22.656637 kubelet[2306]: E1108 00:38:22.656554 2306 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.239.57.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:22.657838 kubelet[2306]: I1108 00:38:22.657822 2306 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:38:22.663625 kubelet[2306]: E1108 00:38:22.663600 2306 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:38:22.663718 kubelet[2306]: I1108 00:38:22.663689 2306 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:38:22.668349 kubelet[2306]: I1108 00:38:22.668303 2306 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:38:22.672457 kubelet[2306]: I1108 00:38:22.672403 2306 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:38:22.672595 kubelet[2306]: I1108 00:38:22.672441 2306 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-57-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 8 00:38:22.672697 kubelet[2306]: I1108 00:38:22.672601 2306 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:38:22.672697 kubelet[2306]: I1108 00:38:22.672613 2306 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:38:22.672755 kubelet[2306]: I1108 00:38:22.672723 2306 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:38:22.676285 kubelet[2306]: I1108 00:38:22.676179 2306 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:38:22.678468 kubelet[2306]: I1108 00:38:22.678118 2306 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:38:22.678468 kubelet[2306]: I1108 00:38:22.678167 2306 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:38:22.678468 kubelet[2306]: I1108 00:38:22.678182 2306 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:38:22.683045 kubelet[2306]: W1108 00:38:22.683017 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.239.57.26:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-57-26&limit=500&resourceVersion=0": dial tcp 172.239.57.26:6443: connect: connection refused
Nov 8 00:38:22.683347 kubelet[2306]: E1108 00:38:22.683328 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.239.57.26:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-57-26&limit=500&resourceVersion=0\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:22.683482 kubelet[2306]: I1108 00:38:22.683454 2306 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:38:22.683811 kubelet[2306]: I1108 00:38:22.683785 2306 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:38:22.685202 kubelet[2306]: W1108 00:38:22.684394 2306 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 8 00:38:22.686470 kubelet[2306]: I1108 00:38:22.686262 2306 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:38:22.686470 kubelet[2306]: I1108 00:38:22.686290 2306 server.go:1287] "Started kubelet"
Nov 8 00:38:22.691050 kubelet[2306]: I1108 00:38:22.690924 2306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:38:22.693319 kubelet[2306]: W1108 00:38:22.693283 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.239.57.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.239.57.26:6443: connect: connection refused
Nov 8 00:38:22.693379 kubelet[2306]: E1108 00:38:22.693340 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.239.57.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:22.696720 kubelet[2306]: E1108 00:38:22.695424 2306 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.57.26:6443/api/v1/namespaces/default/events\": dial tcp 172.239.57.26:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-57-26.1875e1136a1c2548 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-57-26,UID:172-239-57-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-57-26,},FirstTimestamp:2025-11-08 00:38:22.686274888 +0000 UTC m=+0.285762340,LastTimestamp:2025-11-08 00:38:22.686274888 +0000 UTC m=+0.285762340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-57-26,}"
Nov 8 00:38:22.697507 kubelet[2306]: I1108 00:38:22.697460 2306 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:38:22.699646 kubelet[2306]: I1108 00:38:22.699486 2306 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:38:22.699646 kubelet[2306]: E1108 00:38:22.699637 2306 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-57-26\" not found"
Nov 8 00:38:22.700195 kubelet[2306]: I1108 00:38:22.700168 2306 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:38:22.700240 kubelet[2306]: I1108 00:38:22.700223 2306 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:38:22.702166 kubelet[2306]: W1108 00:38:22.701425 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.239.57.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.239.57.26:6443: connect: connection refused
Nov 8 00:38:22.702166 kubelet[2306]: E1108 00:38:22.701461 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.239.57.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:22.702166 kubelet[2306]: E1108 00:38:22.701535 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-26?timeout=10s\": dial tcp 172.239.57.26:6443: connect: connection refused" interval="200ms"
Nov 8 00:38:22.702166 kubelet[2306]: I1108 00:38:22.701703 2306 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:38:22.702166 kubelet[2306]: I1108 00:38:22.701767 2306 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:38:22.702166 kubelet[2306]: I1108 00:38:22.702118 2306 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:38:22.702701 kubelet[2306]: I1108 00:38:22.702665 2306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:38:22.702939 kubelet[2306]: I1108 00:38:22.702923 2306 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:38:22.703111 kubelet[2306]: I1108 00:38:22.703082 2306 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:38:22.704285 kubelet[2306]: E1108 00:38:22.704251 2306 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:38:22.704718 kubelet[2306]: I1108 00:38:22.704671 2306 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:38:22.722219 kubelet[2306]: I1108 00:38:22.722196 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:38:22.729688 kubelet[2306]: I1108 00:38:22.729674 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:38:22.729844 kubelet[2306]: I1108 00:38:22.729776 2306 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:38:22.729844 kubelet[2306]: I1108 00:38:22.729800 2306 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
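The "Failed to ensure lease exists, will retry" entry above starts at interval="200ms"; in the entries that follow, the interval doubles to 400ms, 800ms, and then 1.6s while 172.239.57.26:6443 keeps refusing connections. A minimal sketch of that doubling retry against the logged endpoint, assuming only the standard library and an arbitrary attempt cap:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Endpoint taken from the log; the kube-apiserver is not up yet,
    	// so each dial fails and the wait doubles, as in the lease retries.
    	addr := "172.239.57.26:6443"
    	interval := 200 * time.Millisecond
    	for i := 0; i < 5; i++ {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("api server reachable")
    			return
    		}
    		fmt.Printf("dial failed (%v), retrying in %v\n", err, interval)
    		time.Sleep(interval)
    		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s ...
    	}
    }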
Nov 8 00:38:22.729844 kubelet[2306]: I1108 00:38:22.729808 2306 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:38:22.732170 kubelet[2306]: E1108 00:38:22.732114 2306 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:38:22.733814 kubelet[2306]: W1108 00:38:22.733783 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.239.57.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.239.57.26:6443: connect: connection refused
Nov 8 00:38:22.733900 kubelet[2306]: E1108 00:38:22.733884 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.239.57.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:22.739974 kubelet[2306]: I1108 00:38:22.739960 2306 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:38:22.740075 kubelet[2306]: I1108 00:38:22.740062 2306 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:38:22.740153 kubelet[2306]: I1108 00:38:22.740120 2306 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:38:22.742812 kubelet[2306]: I1108 00:38:22.742556 2306 policy_none.go:49] "None policy: Start"
Nov 8 00:38:22.742812 kubelet[2306]: I1108 00:38:22.742603 2306 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:38:22.742812 kubelet[2306]: I1108 00:38:22.742617 2306 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:38:22.751579 kubelet[2306]: I1108 00:38:22.751563 2306 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:38:22.752592 kubelet[2306]: I1108 00:38:22.752574 2306 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:38:22.752628 kubelet[2306]: I1108 00:38:22.752593 2306 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:38:22.752867 kubelet[2306]: I1108 00:38:22.752830 2306 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:38:22.756011 kubelet[2306]: E1108 00:38:22.755966 2306 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:38:22.756011 kubelet[2306]: E1108 00:38:22.756009 2306 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-57-26\" not found"
Nov 8 00:38:22.843166 kubelet[2306]: E1108 00:38:22.842852 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-26\" not found" node="172-239-57-26"
Nov 8 00:38:22.844618 kubelet[2306]: E1108 00:38:22.844595 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-26\" not found" node="172-239-57-26"
Nov 8 00:38:22.849543 kubelet[2306]: E1108 00:38:22.849525 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-26\" not found" node="172-239-57-26"
Nov 8 00:38:22.854750 kubelet[2306]: I1108 00:38:22.854721 2306 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-26"
Nov 8 00:38:22.855310 kubelet[2306]: E1108 00:38:22.855284 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.26:6443/api/v1/nodes\": dial tcp 172.239.57.26:6443: connect: connection refused" node="172-239-57-26"
Nov 8 00:38:22.901580 kubelet[2306]: I1108 00:38:22.901536 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-k8s-certs\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:22.901580 kubelet[2306]: I1108 00:38:22.901570 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-kubeconfig\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:22.901651 kubelet[2306]: I1108 00:38:22.901592 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58b6c6d7a711551a4f150de0e2b1adc9-kubeconfig\") pod \"kube-scheduler-172-239-57-26\" (UID: \"58b6c6d7a711551a4f150de0e2b1adc9\") " pod="kube-system/kube-scheduler-172-239-57-26"
Nov 8 00:38:22.901651 kubelet[2306]: I1108 00:38:22.901607 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11a027eb797f5527392a4f27109df3bd-ca-certs\") pod \"kube-apiserver-172-239-57-26\" (UID: \"11a027eb797f5527392a4f27109df3bd\") " pod="kube-system/kube-apiserver-172-239-57-26"
Nov 8 00:38:22.901651 kubelet[2306]: I1108 00:38:22.901622 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11a027eb797f5527392a4f27109df3bd-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-57-26\" (UID: \"11a027eb797f5527392a4f27109df3bd\") " pod="kube-system/kube-apiserver-172-239-57-26"
Nov 8 00:38:22.901651 kubelet[2306]: I1108 00:38:22.901637 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:22.901651 kubelet[2306]: I1108 00:38:22.901650 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11a027eb797f5527392a4f27109df3bd-k8s-certs\") pod \"kube-apiserver-172-239-57-26\" (UID: \"11a027eb797f5527392a4f27109df3bd\") " pod="kube-system/kube-apiserver-172-239-57-26"
Nov 8 00:38:22.901766 kubelet[2306]: I1108 00:38:22.901663 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-ca-certs\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:22.901766 kubelet[2306]: I1108 00:38:22.901677 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-flexvolume-dir\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:22.902214 kubelet[2306]: E1108 00:38:22.902124 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-26?timeout=10s\": dial tcp 172.239.57.26:6443: connect: connection refused" interval="400ms"
Nov 8 00:38:23.059306 kubelet[2306]: I1108 00:38:23.057333 2306 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-26"
Nov 8 00:38:23.059540 kubelet[2306]: E1108 00:38:23.059512 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.26:6443/api/v1/nodes\": dial tcp 172.239.57.26:6443: connect: connection refused" node="172-239-57-26"
Nov 8 00:38:23.144340 kubelet[2306]: E1108 00:38:23.144314 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:23.144918 containerd[1577]: time="2025-11-08T00:38:23.144851156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-57-26,Uid:58b6c6d7a711551a4f150de0e2b1adc9,Namespace:kube-system,Attempt:0,}"
Nov 8 00:38:23.145367 kubelet[2306]: E1108 00:38:23.145232 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:23.145554 containerd[1577]: time="2025-11-08T00:38:23.145523687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-57-26,Uid:11a027eb797f5527392a4f27109df3bd,Namespace:kube-system,Attempt:0,}"
Nov 8 00:38:23.149937 kubelet[2306]: E1108 00:38:23.149907 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:23.150375 containerd[1577]: time="2025-11-08T00:38:23.150211894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-57-26,Uid:4371d2c097ce5192436053fe25a1855c,Namespace:kube-system,Attempt:0,}"
Nov 8 00:38:23.303171 kubelet[2306]: E1108 00:38:23.303103 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-26?timeout=10s\": dial tcp 172.239.57.26:6443: connect: connection refused" interval="800ms"
Nov 8 00:38:23.461404 kubelet[2306]: I1108 00:38:23.461378 2306 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-26"
Nov 8 00:38:23.462007 kubelet[2306]: E1108 00:38:23.461668 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.26:6443/api/v1/nodes\": dial tcp 172.239.57.26:6443: connect: connection refused" node="172-239-57-26"
Nov 8 00:38:23.660680 kubelet[2306]: W1108 00:38:23.660569 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.239.57.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.239.57.26:6443: connect: connection refused
Nov 8 00:38:23.660680 kubelet[2306]: E1108 00:38:23.660655 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.239.57.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:23.795601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679785433.mount: Deactivated successfully.
Nov 8 00:38:23.800500 containerd[1577]: time="2025-11-08T00:38:23.800472629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:38:23.801488 containerd[1577]: time="2025-11-08T00:38:23.801264860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 8 00:38:23.802009 containerd[1577]: time="2025-11-08T00:38:23.801979121Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:38:23.803474 containerd[1577]: time="2025-11-08T00:38:23.803419443Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:38:23.804276 containerd[1577]: time="2025-11-08T00:38:23.804220855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:38:23.804835 containerd[1577]: time="2025-11-08T00:38:23.804753095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:38:23.805655 containerd[1577]: time="2025-11-08T00:38:23.805607807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:38:23.806969 containerd[1577]: time="2025-11-08T00:38:23.806937359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:38:23.810047 containerd[1577]: time="2025-11-08T00:38:23.810019803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 659.765739ms"
Nov 8 00:38:23.811274 containerd[1577]: time="2025-11-08T00:38:23.811246435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 666.313459ms"
Nov 8 00:38:23.811419 containerd[1577]: time="2025-11-08T00:38:23.811346915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.630428ms"
Nov 8 00:38:23.856408 kubelet[2306]: W1108 00:38:23.856310 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.239.57.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.239.57.26:6443: connect: connection refused
Nov 8 00:38:23.856408 kubelet[2306]: E1108 00:38:23.856372 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.239.57.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:23.919717 containerd[1577]: time="2025-11-08T00:38:23.919634328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:38:23.920150 containerd[1577]: time="2025-11-08T00:38:23.919997858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:38:23.923300 containerd[1577]: time="2025-11-08T00:38:23.922499712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:23.923300 containerd[1577]: time="2025-11-08T00:38:23.922621402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:23.926980 containerd[1577]: time="2025-11-08T00:38:23.926926419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:38:23.927166 containerd[1577]: time="2025-11-08T00:38:23.927080509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:38:23.927289 containerd[1577]: time="2025-11-08T00:38:23.927259089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:23.927597 containerd[1577]: time="2025-11-08T00:38:23.927535010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:23.933635 containerd[1577]: time="2025-11-08T00:38:23.933499899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:38:23.935342 containerd[1577]: time="2025-11-08T00:38:23.935180971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:38:23.935342 containerd[1577]: time="2025-11-08T00:38:23.935202681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:23.935342 containerd[1577]: time="2025-11-08T00:38:23.935286361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:24.025123 containerd[1577]: time="2025-11-08T00:38:24.024808145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-57-26,Uid:58b6c6d7a711551a4f150de0e2b1adc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7cf94fd8d75af16972482248ee7ed65677beaf2f7d9407a0f48f6ed34378584\""
Nov 8 00:38:24.026240 kubelet[2306]: E1108 00:38:24.026221 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:24.028651 containerd[1577]: time="2025-11-08T00:38:24.028628381Z" level=info msg="CreateContainer within sandbox \"d7cf94fd8d75af16972482248ee7ed65677beaf2f7d9407a0f48f6ed34378584\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 8 00:38:24.039984 containerd[1577]: time="2025-11-08T00:38:24.039941128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-57-26,Uid:11a027eb797f5527392a4f27109df3bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fd7b1d09481d27ea07e7d5022c98fc37bbcc9f9ec0276e26d1eb350b47f8887\""
Nov 8 00:38:24.044084 kubelet[2306]: E1108 00:38:24.043982 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:24.047181 containerd[1577]: time="2025-11-08T00:38:24.046652268Z" level=info msg="CreateContainer within sandbox \"5fd7b1d09481d27ea07e7d5022c98fc37bbcc9f9ec0276e26d1eb350b47f8887\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 8 00:38:24.048327 containerd[1577]: time="2025-11-08T00:38:24.048221051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-57-26,Uid:4371d2c097ce5192436053fe25a1855c,Namespace:kube-system,Attempt:0,} returns sandbox id \"93f8b1c97e0c45e51009c371ab58a1e88cc10b4b11ae8dd8a535ab2a74b0fe21\""
Nov 8 00:38:24.049324 kubelet[2306]: E1108 00:38:24.049214 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:24.051491 containerd[1577]: time="2025-11-08T00:38:24.051453255Z" level=info msg="CreateContainer within sandbox \"93f8b1c97e0c45e51009c371ab58a1e88cc10b4b11ae8dd8a535ab2a74b0fe21\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 8 00:38:24.055951 containerd[1577]: time="2025-11-08T00:38:24.055930452Z" level=info msg="CreateContainer within sandbox \"d7cf94fd8d75af16972482248ee7ed65677beaf2f7d9407a0f48f6ed34378584\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b17b486e0eb9e28fdcd91a7fbaa86bc0661e5b351536c354f63a93888b94c0dd\""
Nov 8 00:38:24.056530 containerd[1577]: time="2025-11-08T00:38:24.056509143Z" level=info msg="StartContainer for \"b17b486e0eb9e28fdcd91a7fbaa86bc0661e5b351536c354f63a93888b94c0dd\""
Nov 8 00:38:24.063740 containerd[1577]: time="2025-11-08T00:38:24.063718374Z" level=info msg="CreateContainer within sandbox \"5fd7b1d09481d27ea07e7d5022c98fc37bbcc9f9ec0276e26d1eb350b47f8887\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08847011f586f9fd17cb3fd890ce2b110cdbb9f7a0d04228b2ab77353afae252\""
Nov 8 00:38:24.065179 containerd[1577]: time="2025-11-08T00:38:24.065157406Z" level=info msg="StartContainer for \"08847011f586f9fd17cb3fd890ce2b110cdbb9f7a0d04228b2ab77353afae252\""
Nov 8 00:38:24.071455 containerd[1577]: time="2025-11-08T00:38:24.071432585Z" level=info msg="CreateContainer within sandbox \"93f8b1c97e0c45e51009c371ab58a1e88cc10b4b11ae8dd8a535ab2a74b0fe21\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"adc0eaf96f26717e26bc2713d591568d7b1bca9453646b5dddc96e210fb48eb4\""
Nov 8 00:38:24.072163 containerd[1577]: time="2025-11-08T00:38:24.072056696Z" level=info msg="StartContainer for \"adc0eaf96f26717e26bc2713d591568d7b1bca9453646b5dddc96e210fb48eb4\""
Nov 8 00:38:24.103517 kubelet[2306]: E1108 00:38:24.103469 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-26?timeout=10s\": dial tcp 172.239.57.26:6443: connect: connection refused" interval="1.6s"
Nov 8 00:38:24.127527 kubelet[2306]: W1108 00:38:24.127392 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.239.57.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.239.57.26:6443: connect: connection refused
Nov 8 00:38:24.127926 kubelet[2306]: E1108 00:38:24.127820 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.239.57.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.57.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:38:24.177915 containerd[1577]: time="2025-11-08T00:38:24.177849695Z" level=info msg="StartContainer for \"08847011f586f9fd17cb3fd890ce2b110cdbb9f7a0d04228b2ab77353afae252\" returns successfully"
Nov 8 00:38:24.200021 containerd[1577]: time="2025-11-08T00:38:24.198209876Z" level=info msg="StartContainer for \"b17b486e0eb9e28fdcd91a7fbaa86bc0661e5b351536c354f63a93888b94c0dd\" returns successfully"
Nov 8 00:38:24.211455 containerd[1577]: time="2025-11-08T00:38:24.211404195Z" level=info msg="StartContainer for \"adc0eaf96f26717e26bc2713d591568d7b1bca9453646b5dddc96e210fb48eb4\" returns successfully"
Nov 8 00:38:24.266168 kubelet[2306]: I1108 00:38:24.265552 2306 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-26"
Nov 8 00:38:24.749168 kubelet[2306]: E1108 00:38:24.748824 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-26\" not found" node="172-239-57-26"
Nov 8 00:38:24.749168 kubelet[2306]: E1108 00:38:24.748939 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:24.750941 kubelet[2306]: E1108 00:38:24.750870 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-26\" not found" node="172-239-57-26"
Nov 8 00:38:24.751020 kubelet[2306]: E1108 00:38:24.750997 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:24.757769 kubelet[2306]: E1108 00:38:24.757739 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-26\" not found" node="172-239-57-26"
Nov 8 00:38:24.758162 kubelet[2306]: E1108 00:38:24.757838 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:25.592761 kubelet[2306]: I1108 00:38:25.592717 2306 kubelet_node_status.go:78] "Successfully registered node" node="172-239-57-26"
Nov 8 00:38:25.600010 kubelet[2306]: I1108 00:38:25.599988 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-26"
Nov 8 00:38:25.653518 kubelet[2306]: E1108 00:38:25.653462 2306 kubelet.go:3196] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-scheduler-172-239-57-26"
Nov 8 00:38:25.653518 kubelet[2306]: I1108 00:38:25.653481 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-26"
Nov 8 00:38:25.691384 kubelet[2306]: I1108 00:38:25.691346 2306 apiserver.go:52] "Watching apiserver"
Nov 8 00:38:25.701074 kubelet[2306]: I1108 00:38:25.701037 2306 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 00:38:25.711857 kubelet[2306]: E1108 00:38:25.711304 2306 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-57-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-57-26"
Nov 8 00:38:25.711857 kubelet[2306]: I1108 00:38:25.711332 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:25.712837 kubelet[2306]: E1108 00:38:25.712818 2306 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-57-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:25.758237 kubelet[2306]: I1108 00:38:25.758207 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-26"
Nov 8 00:38:25.759157 kubelet[2306]: I1108 00:38:25.758621 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-26"
Nov 8 00:38:25.761688 kubelet[2306]: E1108 00:38:25.761292 2306 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-57-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-57-26"
Nov 8 00:38:25.761688 kubelet[2306]: E1108 00:38:25.761486 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:25.761998 kubelet[2306]: E1108 00:38:25.761909 2306 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-57-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-57-26"
Nov 8 00:38:25.762192 kubelet[2306]: E1108 00:38:25.762028 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:27.343896 systemd[1]: Reloading requested from client PID 2574 ('systemctl') (unit session-7.scope)...
Nov 8 00:38:27.343921 systemd[1]: Reloading...
Nov 8 00:38:27.448188 zram_generator::config[2614]: No configuration found.
Nov 8 00:38:27.496743 kubelet[2306]: I1108 00:38:27.496302 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-26"
Nov 8 00:38:27.502729 kubelet[2306]: E1108 00:38:27.502679 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:27.585780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:38:27.661778 systemd[1]: Reloading finished in 317 ms.
Nov 8 00:38:27.701494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:38:27.715592 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:38:27.715956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:38:27.725405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:38:27.875285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:38:27.880647 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:38:27.930361 kubelet[2675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:38:27.930361 kubelet[2675]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:38:27.930361 kubelet[2675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
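The recurring dns.go:153 errors throughout this phase come from kubelet capping resolv.conf at three nameservers (the classic glibc resolver limit), which is why the applied line keeps only 172.232.0.19, 172.232.0.20, and 172.232.0.15. A rough sketch of such a check, assuming the usual resolv.conf format rather than kubelet's actual implementation:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()

    	// Collect every "nameserver <addr>" line.
    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > 3 {
    		// Mirrors the warning above: extras beyond three are dropped.
    		fmt.Printf("%d nameservers found, only first 3 applied: %v\n",
    			len(servers), servers[:3])
    	} else {
    		fmt.Println("nameservers within limit:", servers)
    	}
    }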
Nov 8 00:38:27.930361 kubelet[2675]: I1108 00:38:27.930247 2675 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:38:27.937865 kubelet[2675]: I1108 00:38:27.937826 2675 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:38:27.937865 kubelet[2675]: I1108 00:38:27.937852 2675 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:38:27.938065 kubelet[2675]: I1108 00:38:27.938008 2675 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:38:27.939005 kubelet[2675]: I1108 00:38:27.938959 2675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 8 00:38:27.940807 kubelet[2675]: I1108 00:38:27.940760 2675 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:38:27.944046 kubelet[2675]: E1108 00:38:27.943953 2675 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:38:27.944046 kubelet[2675]: I1108 00:38:27.943979 2675 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:38:27.949185 kubelet[2675]: I1108 00:38:27.949096 2675 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:38:27.950193 kubelet[2675]: I1108 00:38:27.949653 2675 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:38:27.950255 kubelet[2675]: I1108 00:38:27.949678 2675 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-57-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 8 00:38:27.950255 kubelet[2675]: I1108 00:38:27.950221 2675 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:38:27.950255 kubelet[2675]: I1108 00:38:27.950239 2675 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:38:27.950434 kubelet[2675]: I1108 00:38:27.950284 2675 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:38:27.950434 kubelet[2675]: I1108 00:38:27.950424 2675 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:38:27.950524 kubelet[2675]: I1108 00:38:27.950444 2675 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:38:27.950524 kubelet[2675]: I1108 00:38:27.950460 2675 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:38:27.950524 kubelet[2675]: I1108 00:38:27.950469 2675 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:38:27.952586 kubelet[2675]: I1108 00:38:27.952560 2675 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:38:27.952872 kubelet[2675]: I1108 00:38:27.952839 2675 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:38:27.955169 kubelet[2675]: I1108 00:38:27.953202 2675 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:38:27.955169 kubelet[2675]: I1108 00:38:27.953234 2675 server.go:1287] "Started kubelet"
Nov 8 00:38:27.956111 kubelet[2675]: I1108 00:38:27.956082 2675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:38:27.974036 kubelet[2675]: I1108 00:38:27.972697 2675 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:38:27.974036 kubelet[2675]: I1108 00:38:27.974012 2675 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:38:27.978202 kubelet[2675]: I1108 00:38:27.977370 2675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:38:27.978202 kubelet[2675]: I1108 00:38:27.977681 2675 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:38:27.978202 kubelet[2675]: I1108 00:38:27.977994 2675 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:38:27.980759 kubelet[2675]: I1108 00:38:27.980741 2675 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:38:27.981411 kubelet[2675]: E1108 00:38:27.981391 2675 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-57-26\" not found"
Nov 8 00:38:27.982329 kubelet[2675]: I1108 00:38:27.982287 2675 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:38:27.982489 kubelet[2675]: I1108 00:38:27.982450 2675 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:38:27.984191 kubelet[2675]: I1108 00:38:27.983814 2675 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:38:27.985977 kubelet[2675]: I1108 00:38:27.985942 2675 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:38:27.987865 kubelet[2675]: I1108 00:38:27.987849 2675 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:38:27.990282 kubelet[2675]: I1108 00:38:27.990256 2675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:38:27.991815 kubelet[2675]: I1108 00:38:27.991797 2675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:38:27.991902 kubelet[2675]: I1108 00:38:27.991889 2675 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:38:27.991967 kubelet[2675]: I1108 00:38:27.991957 2675 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:38:27.992009 kubelet[2675]: I1108 00:38:27.992001 2675 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:38:27.992172 kubelet[2675]: E1108 00:38:27.992100 2675 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:38:28.060831 kubelet[2675]: I1108 00:38:28.060766 2675 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:38:28.060948 kubelet[2675]: I1108 00:38:28.060933 2675 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:38:28.061003 kubelet[2675]: I1108 00:38:28.060994 2675 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:38:28.061236 kubelet[2675]: I1108 00:38:28.061217 2675 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 00:38:28.061306 kubelet[2675]: I1108 00:38:28.061283 2675 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 00:38:28.061350 kubelet[2675]: I1108 00:38:28.061342 2675 policy_none.go:49] "None policy: Start"
Nov 8 00:38:28.061396 kubelet[2675]: I1108 00:38:28.061388 2675 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:38:28.061441 kubelet[2675]: I1108 00:38:28.061433 2675 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:38:28.061669 kubelet[2675]: I1108 00:38:28.061651 2675 state_mem.go:75] "Updated machine memory state"
Nov 8 00:38:28.063471 kubelet[2675]: I1108 00:38:28.063454 2675 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:38:28.063692 kubelet[2675]: I1108 00:38:28.063680 2675 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:38:28.063760 kubelet[2675]: I1108 00:38:28.063737 2675 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:38:28.065354 kubelet[2675]: I1108 00:38:28.065329 2675 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:38:28.070172 kubelet[2675]: E1108 00:38:28.069086 2675 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Nov 8 00:38:28.093102 kubelet[2675]: I1108 00:38:28.093077 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-26" Nov 8 00:38:28.093907 kubelet[2675]: I1108 00:38:28.093445 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-26" Nov 8 00:38:28.094059 kubelet[2675]: I1108 00:38:28.093640 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-26" Nov 8 00:38:28.101697 kubelet[2675]: E1108 00:38:28.101668 2675 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-57-26\" already exists" pod="kube-system/kube-controller-manager-172-239-57-26" Nov 8 00:38:28.167490 kubelet[2675]: I1108 00:38:28.167452 2675 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-26" Nov 8 00:38:28.178281 kubelet[2675]: I1108 00:38:28.177376 2675 kubelet_node_status.go:124] "Node was previously registered" node="172-239-57-26" Nov 8 00:38:28.178281 kubelet[2675]: I1108 00:38:28.177565 2675 kubelet_node_status.go:78] "Successfully registered node" node="172-239-57-26" Nov 8 00:38:28.189307 kubelet[2675]: I1108 00:38:28.188764 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58b6c6d7a711551a4f150de0e2b1adc9-kubeconfig\") pod \"kube-scheduler-172-239-57-26\" (UID: \"58b6c6d7a711551a4f150de0e2b1adc9\") " pod="kube-system/kube-scheduler-172-239-57-26" Nov 8 00:38:28.189307 kubelet[2675]: I1108 00:38:28.188883 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11a027eb797f5527392a4f27109df3bd-ca-certs\") pod \"kube-apiserver-172-239-57-26\" (UID: \"11a027eb797f5527392a4f27109df3bd\") " pod="kube-system/kube-apiserver-172-239-57-26" Nov 8 00:38:28.189307 kubelet[2675]: I1108 00:38:28.188911 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-ca-certs\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26" Nov 8 00:38:28.189307 kubelet[2675]: I1108 00:38:28.188932 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-kubeconfig\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26" Nov 8 00:38:28.189307 kubelet[2675]: I1108 00:38:28.188954 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26" Nov 8 00:38:28.189583 kubelet[2675]: I1108 00:38:28.188973 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11a027eb797f5527392a4f27109df3bd-k8s-certs\") pod \"kube-apiserver-172-239-57-26\" (UID: 
\"11a027eb797f5527392a4f27109df3bd\") " pod="kube-system/kube-apiserver-172-239-57-26" Nov 8 00:38:28.189583 kubelet[2675]: I1108 00:38:28.188989 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11a027eb797f5527392a4f27109df3bd-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-57-26\" (UID: \"11a027eb797f5527392a4f27109df3bd\") " pod="kube-system/kube-apiserver-172-239-57-26" Nov 8 00:38:28.189583 kubelet[2675]: I1108 00:38:28.189006 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-flexvolume-dir\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26" Nov 8 00:38:28.189583 kubelet[2675]: I1108 00:38:28.189023 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4371d2c097ce5192436053fe25a1855c-k8s-certs\") pod \"kube-controller-manager-172-239-57-26\" (UID: \"4371d2c097ce5192436053fe25a1855c\") " pod="kube-system/kube-controller-manager-172-239-57-26" Nov 8 00:38:28.399424 kubelet[2675]: E1108 00:38:28.398195 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:28.399424 kubelet[2675]: E1108 00:38:28.399237 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:28.403121 kubelet[2675]: E1108 00:38:28.402981 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:28.959346 kubelet[2675]: I1108 00:38:28.959303 2675 apiserver.go:52] "Watching apiserver" Nov 8 00:38:28.984272 kubelet[2675]: I1108 00:38:28.984236 2675 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:38:29.031038 kubelet[2675]: I1108 00:38:29.030435 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-26" Nov 8 00:38:29.031320 kubelet[2675]: I1108 00:38:29.031305 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-26" Nov 8 00:38:29.033561 kubelet[2675]: E1108 00:38:29.032997 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:29.035554 kubelet[2675]: E1108 00:38:29.035538 2675 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-57-26\" already exists" pod="kube-system/kube-apiserver-172-239-57-26" Nov 8 00:38:29.035755 kubelet[2675]: E1108 00:38:29.035740 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:29.038684 kubelet[2675]: E1108 00:38:29.038669 2675 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-57-26\" 
already exists" pod="kube-system/kube-scheduler-172-239-57-26" Nov 8 00:38:29.038848 kubelet[2675]: E1108 00:38:29.038834 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:29.062080 kubelet[2675]: I1108 00:38:29.062044 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-57-26" podStartSLOduration=1.062033321 podStartE2EDuration="1.062033321s" podCreationTimestamp="2025-11-08 00:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:38:29.05490458 +0000 UTC m=+1.170374666" watchObservedRunningTime="2025-11-08 00:38:29.062033321 +0000 UTC m=+1.177503417" Nov 8 00:38:29.071471 kubelet[2675]: I1108 00:38:29.071206 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-57-26" podStartSLOduration=2.071061654 podStartE2EDuration="2.071061654s" podCreationTimestamp="2025-11-08 00:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:38:29.071038364 +0000 UTC m=+1.186508480" watchObservedRunningTime="2025-11-08 00:38:29.071061654 +0000 UTC m=+1.186531740" Nov 8 00:38:29.071957 kubelet[2675]: I1108 00:38:29.071905 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-57-26" podStartSLOduration=1.071763895 podStartE2EDuration="1.071763895s" podCreationTimestamp="2025-11-08 00:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:38:29.062249301 +0000 UTC m=+1.177719397" watchObservedRunningTime="2025-11-08 00:38:29.071763895 +0000 UTC m=+1.187233991" Nov 8 00:38:30.035238 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 8 00:38:30.035732 kubelet[2675]: E1108 00:38:30.035693 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:30.039023 kubelet[2675]: E1108 00:38:30.036839 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:31.037504 kubelet[2675]: E1108 00:38:31.037440 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:34.113884 kubelet[2675]: I1108 00:38:34.113504 2675 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:38:34.116964 containerd[1577]: time="2025-11-08T00:38:34.116906658Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:38:34.117920 kubelet[2675]: I1108 00:38:34.117265 2675 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:38:35.239317 kubelet[2675]: I1108 00:38:35.239045 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26986205-ffe4-450e-84d8-0b65ecc26a08-lib-modules\") pod \"kube-proxy-pl26v\" (UID: \"26986205-ffe4-450e-84d8-0b65ecc26a08\") " pod="kube-system/kube-proxy-pl26v" Nov 8 00:38:35.239317 kubelet[2675]: I1108 00:38:35.239098 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/26986205-ffe4-450e-84d8-0b65ecc26a08-kube-proxy\") pod \"kube-proxy-pl26v\" (UID: \"26986205-ffe4-450e-84d8-0b65ecc26a08\") " pod="kube-system/kube-proxy-pl26v" Nov 8 00:38:35.239317 kubelet[2675]: I1108 00:38:35.239119 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26986205-ffe4-450e-84d8-0b65ecc26a08-xtables-lock\") pod \"kube-proxy-pl26v\" (UID: \"26986205-ffe4-450e-84d8-0b65ecc26a08\") " pod="kube-system/kube-proxy-pl26v" Nov 8 00:38:35.239317 kubelet[2675]: I1108 00:38:35.239172 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dhb\" (UniqueName: \"kubernetes.io/projected/26986205-ffe4-450e-84d8-0b65ecc26a08-kube-api-access-z4dhb\") pod \"kube-proxy-pl26v\" (UID: \"26986205-ffe4-450e-84d8-0b65ecc26a08\") " pod="kube-system/kube-proxy-pl26v" Nov 8 00:38:35.340534 kubelet[2675]: I1108 00:38:35.339826 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x782d\" (UniqueName: \"kubernetes.io/projected/49684fda-5b0c-443a-ad79-e18237097edb-kube-api-access-x782d\") pod \"tigera-operator-7dcd859c48-mlk2l\" (UID: \"49684fda-5b0c-443a-ad79-e18237097edb\") " pod="tigera-operator/tigera-operator-7dcd859c48-mlk2l" Nov 8 00:38:35.340534 kubelet[2675]: I1108 00:38:35.339874 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49684fda-5b0c-443a-ad79-e18237097edb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mlk2l\" (UID: \"49684fda-5b0c-443a-ad79-e18237097edb\") " pod="tigera-operator/tigera-operator-7dcd859c48-mlk2l" Nov 8 00:38:35.474278 kubelet[2675]: E1108 00:38:35.474190 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:35.475415 containerd[1577]: time="2025-11-08T00:38:35.475380871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pl26v,Uid:26986205-ffe4-450e-84d8-0b65ecc26a08,Namespace:kube-system,Attempt:0,}" Nov 8 00:38:35.499492 containerd[1577]: time="2025-11-08T00:38:35.499057620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:38:35.499492 containerd[1577]: time="2025-11-08T00:38:35.499121120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:38:35.499492 containerd[1577]: time="2025-11-08T00:38:35.499179180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:35.499492 containerd[1577]: time="2025-11-08T00:38:35.499315009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:35.539814 containerd[1577]: time="2025-11-08T00:38:35.539534638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mlk2l,Uid:49684fda-5b0c-443a-ad79-e18237097edb,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:38:35.550615 containerd[1577]: time="2025-11-08T00:38:35.550558483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pl26v,Uid:26986205-ffe4-450e-84d8-0b65ecc26a08,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e8b683017c1a3c78bd2bda7243bc1b8372253b4bf24d08011534f6b3f89d1fb\"" Nov 8 00:38:35.551855 kubelet[2675]: E1108 00:38:35.551472 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:35.555515 containerd[1577]: time="2025-11-08T00:38:35.555494030Z" level=info msg="CreateContainer within sandbox \"2e8b683017c1a3c78bd2bda7243bc1b8372253b4bf24d08011534f6b3f89d1fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:38:35.567042 containerd[1577]: time="2025-11-08T00:38:35.567017155Z" level=info msg="CreateContainer within sandbox \"2e8b683017c1a3c78bd2bda7243bc1b8372253b4bf24d08011534f6b3f89d1fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cfed531680b2f279072d15d0940795be9810103a870d1663cf86a3c07db3bfbc\"" Nov 8 00:38:35.570842 containerd[1577]: time="2025-11-08T00:38:35.569734543Z" level=info msg="StartContainer for \"cfed531680b2f279072d15d0940795be9810103a870d1663cf86a3c07db3bfbc\"" Nov 8 00:38:35.586791 containerd[1577]: time="2025-11-08T00:38:35.586734314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:38:35.587349 containerd[1577]: time="2025-11-08T00:38:35.587314244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:38:35.587481 containerd[1577]: time="2025-11-08T00:38:35.587453924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:35.588440 containerd[1577]: time="2025-11-08T00:38:35.588372654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:35.668525 containerd[1577]: time="2025-11-08T00:38:35.668460353Z" level=info msg="StartContainer for \"cfed531680b2f279072d15d0940795be9810103a870d1663cf86a3c07db3bfbc\" returns successfully" Nov 8 00:38:35.678722 containerd[1577]: time="2025-11-08T00:38:35.678681867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mlk2l,Uid:49684fda-5b0c-443a-ad79-e18237097edb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7fb095c61404bdbd6702c9248ec5a0ba2731eb96e32f73e060901c02899f1e95\"" Nov 8 00:38:35.682282 containerd[1577]: time="2025-11-08T00:38:35.682240975Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:38:36.058423 kubelet[2675]: E1108 00:38:36.058005 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:37.086301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190272185.mount: Deactivated successfully. Nov 8 00:38:37.560302 containerd[1577]: time="2025-11-08T00:38:37.560234362Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:37.561110 containerd[1577]: time="2025-11-08T00:38:37.561027142Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:38:37.562166 containerd[1577]: time="2025-11-08T00:38:37.561656912Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:37.565568 containerd[1577]: time="2025-11-08T00:38:37.565545831Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:37.567453 containerd[1577]: time="2025-11-08T00:38:37.566030141Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.883754416s" Nov 8 00:38:37.567453 containerd[1577]: time="2025-11-08T00:38:37.567304621Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:38:37.570377 containerd[1577]: time="2025-11-08T00:38:37.570339420Z" level=info msg="CreateContainer within sandbox \"7fb095c61404bdbd6702c9248ec5a0ba2731eb96e32f73e060901c02899f1e95\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:38:37.580955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968771422.mount: Deactivated successfully. 
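The dns.go:153 "Nameserver limits exceeded" errors that recur throughout this boot come from the kubelet's resolv.conf handling: the resolver configuration it propagates supports at most three nameservers, so when the node lists more, the kubelet applies the first three (here 172.232.0.19, 172.232.0.20 and 172.232.0.15) and reports the rest as omitted. Below is a minimal sketch of that truncation, assuming the conventional three-server limit and the default /etc/resolv.conf path; it is illustrative, not the kubelet's implementation.

// Sketch: keep only the first three nameservers from resolv.conf and
// warn about the rest, matching the behavior logged above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		applied := servers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}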
Nov 8 00:38:37.591829 containerd[1577]: time="2025-11-08T00:38:37.591783395Z" level=info msg="CreateContainer within sandbox \"7fb095c61404bdbd6702c9248ec5a0ba2731eb96e32f73e060901c02899f1e95\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e3044a6e2174b69066624cbee4e71e582079a8956ce1d92fd3354d42ab7a9bb\"" Nov 8 00:38:37.592963 containerd[1577]: time="2025-11-08T00:38:37.592266224Z" level=info msg="StartContainer for \"1e3044a6e2174b69066624cbee4e71e582079a8956ce1d92fd3354d42ab7a9bb\"" Nov 8 00:38:37.624487 systemd[1]: run-containerd-runc-k8s.io-1e3044a6e2174b69066624cbee4e71e582079a8956ce1d92fd3354d42ab7a9bb-runc.9kCI1w.mount: Deactivated successfully. Nov 8 00:38:37.651313 containerd[1577]: time="2025-11-08T00:38:37.651161649Z" level=info msg="StartContainer for \"1e3044a6e2174b69066624cbee4e71e582079a8956ce1d92fd3354d42ab7a9bb\" returns successfully" Nov 8 00:38:37.906578 kubelet[2675]: E1108 00:38:37.906471 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:37.918794 kubelet[2675]: I1108 00:38:37.918626 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pl26v" podStartSLOduration=2.918613916 podStartE2EDuration="2.918613916s" podCreationTimestamp="2025-11-08 00:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:38:36.070773634 +0000 UTC m=+8.186243750" watchObservedRunningTime="2025-11-08 00:38:37.918613916 +0000 UTC m=+10.034084012" Nov 8 00:38:38.063463 kubelet[2675]: E1108 00:38:38.062897 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:38.083614 kubelet[2675]: I1108 00:38:38.083228 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mlk2l" podStartSLOduration=1.195269739 podStartE2EDuration="3.083216672s" podCreationTimestamp="2025-11-08 00:38:35 +0000 UTC" firstStartedPulling="2025-11-08 00:38:35.680359337 +0000 UTC m=+7.795829423" lastFinishedPulling="2025-11-08 00:38:37.56830627 +0000 UTC m=+9.683776356" observedRunningTime="2025-11-08 00:38:38.074713013 +0000 UTC m=+10.190183109" watchObservedRunningTime="2025-11-08 00:38:38.083216672 +0000 UTC m=+10.198686758" Nov 8 00:38:38.257068 kubelet[2675]: E1108 00:38:38.257025 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:38.731586 kubelet[2675]: E1108 00:38:38.731535 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:39.064696 kubelet[2675]: E1108 00:38:39.064541 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:39.065155 kubelet[2675]: E1108 00:38:39.065062 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 
172.232.0.20 172.232.0.15" Nov 8 00:38:41.392689 sudo[1813]: pam_unix(sudo:session): session closed for user root Nov 8 00:38:41.447522 sshd[1809]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:41.453660 systemd[1]: sshd@6-172.239.57.26:22-147.75.109.163:44632.service: Deactivated successfully. Nov 8 00:38:41.463426 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:38:41.464828 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:38:41.469543 systemd-logind[1553]: Removed session 7. Nov 8 00:38:45.142176 update_engine[1554]: I20251108 00:38:45.141209 1554 update_attempter.cc:509] Updating boot flags... Nov 8 00:38:45.220180 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (3083) Nov 8 00:38:45.348262 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (3086) Nov 8 00:38:46.922679 kubelet[2675]: I1108 00:38:46.922644 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb4a0a9-9773-48ef-84e2-72ed3c6661e4-tigera-ca-bundle\") pod \"calico-typha-79ff6687b-ftjsf\" (UID: \"6eb4a0a9-9773-48ef-84e2-72ed3c6661e4\") " pod="calico-system/calico-typha-79ff6687b-ftjsf" Nov 8 00:38:46.923274 kubelet[2675]: I1108 00:38:46.923188 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6eb4a0a9-9773-48ef-84e2-72ed3c6661e4-typha-certs\") pod \"calico-typha-79ff6687b-ftjsf\" (UID: \"6eb4a0a9-9773-48ef-84e2-72ed3c6661e4\") " pod="calico-system/calico-typha-79ff6687b-ftjsf" Nov 8 00:38:46.923274 kubelet[2675]: I1108 00:38:46.923228 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qv7t\" (UniqueName: \"kubernetes.io/projected/6eb4a0a9-9773-48ef-84e2-72ed3c6661e4-kube-api-access-7qv7t\") pod \"calico-typha-79ff6687b-ftjsf\" (UID: \"6eb4a0a9-9773-48ef-84e2-72ed3c6661e4\") " pod="calico-system/calico-typha-79ff6687b-ftjsf" Nov 8 00:38:47.124683 kubelet[2675]: I1108 00:38:47.124594 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-cni-log-dir\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.124683 kubelet[2675]: I1108 00:38:47.124640 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-policysync\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.124683 kubelet[2675]: I1108 00:38:47.124661 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-cni-net-dir\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.124683 kubelet[2675]: I1108 00:38:47.124677 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-cni-bin-dir\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125208 kubelet[2675]: I1108 00:38:47.124695 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3028badc-4e38-44e6-8577-25cb91b98f73-tigera-ca-bundle\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125208 kubelet[2675]: I1108 00:38:47.124712 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-var-lib-calico\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125208 kubelet[2675]: I1108 00:38:47.124727 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fv5h\" (UniqueName: \"kubernetes.io/projected/3028badc-4e38-44e6-8577-25cb91b98f73-kube-api-access-9fv5h\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125208 kubelet[2675]: I1108 00:38:47.124745 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-xtables-lock\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125208 kubelet[2675]: I1108 00:38:47.124759 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-flexvol-driver-host\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125368 kubelet[2675]: I1108 00:38:47.124773 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-lib-modules\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125368 kubelet[2675]: I1108 00:38:47.124792 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3028badc-4e38-44e6-8577-25cb91b98f73-node-certs\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.125368 kubelet[2675]: I1108 00:38:47.124837 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3028badc-4e38-44e6-8577-25cb91b98f73-var-run-calico\") pod \"calico-node-bbq46\" (UID: \"3028badc-4e38-44e6-8577-25cb91b98f73\") " pod="calico-system/calico-node-bbq46" Nov 8 00:38:47.166495 kubelet[2675]: E1108 00:38:47.166468 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 
00:38:47.166889 containerd[1577]: time="2025-11-08T00:38:47.166830582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79ff6687b-ftjsf,Uid:6eb4a0a9-9773-48ef-84e2-72ed3c6661e4,Namespace:calico-system,Attempt:0,}"
Nov 8 00:38:47.189832 containerd[1577]: time="2025-11-08T00:38:47.189604155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:38:47.189832 containerd[1577]: time="2025-11-08T00:38:47.189663455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:38:47.189832 containerd[1577]: time="2025-11-08T00:38:47.189686325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:47.190418 containerd[1577]: time="2025-11-08T00:38:47.190194605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:38:47.240523 kubelet[2675]: E1108 00:38:47.237182 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1"
Nov 8 00:38:47.245536 kubelet[2675]: E1108 00:38:47.245518 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:38:47.245694 kubelet[2675]: W1108 00:38:47.245677 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:38:47.245875 kubelet[2675]: E1108 00:38:47.245817 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:38:47.319770 containerd[1577]: time="2025-11-08T00:38:47.319714049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79ff6687b-ftjsf,Uid:6eb4a0a9-9773-48ef-84e2-72ed3c6661e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"880fe729505953e4631fa4fde9f860a2c2f04e36bd78a37bd5958c69283a7670\""
Nov 8 00:38:47.322794 kubelet[2675]: E1108 00:38:47.322515 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:38:47.325978 containerd[1577]: time="2025-11-08T00:38:47.325490963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:38:47.340338 kubelet[2675]: I1108 00:38:47.339929 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e31263c-8cf9-4e4b-a04e-7c52af3f73c1-kubelet-dir\") pod \"csi-node-driver-mdrsj\" (UID: \"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1\") " pod="calico-system/csi-node-driver-mdrsj"
Nov 8 00:38:47.342511 kubelet[2675]: I1108 00:38:47.342420 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e31263c-8cf9-4e4b-a04e-7c52af3f73c1-registration-dir\") pod \"csi-node-driver-mdrsj\" (UID: \"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1\") " pod="calico-system/csi-node-driver-mdrsj"
Nov 8 00:38:47.344856 kubelet[2675]: I1108 00:38:47.344528 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e31263c-8cf9-4e4b-a04e-7c52af3f73c1-varrun\") pod \"csi-node-driver-mdrsj\" (UID: \"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1\") " pod="calico-system/csi-node-driver-mdrsj"
Nov 8 00:38:47.345820 kubelet[2675]: I1108 00:38:47.345743 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e31263c-8cf9-4e4b-a04e-7c52af3f73c1-socket-dir\") pod \"csi-node-driver-mdrsj\" (UID: \"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1\") " pod="calico-system/csi-node-driver-mdrsj"
Nov 8 00:38:47.346520 kubelet[2675]: I1108 00:38:47.346328 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6sb\" (UniqueName: \"kubernetes.io/projected/3e31263c-8cf9-4e4b-a04e-7c52af3f73c1-kube-api-access-zf6sb\") pod \"csi-node-driver-mdrsj\" (UID: \"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1\") " pod="calico-system/csi-node-driver-mdrsj"
Error: unexpected end of JSON input" Nov 8 00:38:47.356553 kubelet[2675]: E1108 00:38:47.356189 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:47.356837 containerd[1577]: time="2025-11-08T00:38:47.356810851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bbq46,Uid:3028badc-4e38-44e6-8577-25cb91b98f73,Namespace:calico-system,Attempt:0,}" Nov 8 00:38:47.378489 containerd[1577]: time="2025-11-08T00:38:47.378296683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:38:47.378489 containerd[1577]: time="2025-11-08T00:38:47.378353853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:38:47.378489 containerd[1577]: time="2025-11-08T00:38:47.378365493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:47.379223 containerd[1577]: time="2025-11-08T00:38:47.379086723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:47.428113 containerd[1577]: time="2025-11-08T00:38:47.427985982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bbq46,Uid:3028badc-4e38-44e6-8577-25cb91b98f73,Namespace:calico-system,Attempt:0,} returns sandbox id \"74610eb876eb92946674a9a45a81ae5ec836ba558ed8f9010bf93791348f7f21\"" Nov 8 00:38:47.429668 kubelet[2675]: E1108 00:38:47.429519 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:47.447880 kubelet[2675]: E1108 00:38:47.447730 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:38:47.447880 kubelet[2675]: W1108 00:38:47.447746 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:38:47.447880 kubelet[2675]: E1108 00:38:47.447763 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:38:47.448418 kubelet[2675]: E1108 00:38:47.448403 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:38:47.448654 kubelet[2675]: W1108 00:38:47.448576 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:38:47.448654 kubelet[2675]: E1108 00:38:47.448593 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:38:47.464935 kubelet[2675]: E1108 00:38:47.464836 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:38:47.464935 kubelet[2675]: W1108 00:38:47.464848 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:38:47.464935 kubelet[2675]: E1108 00:38:47.464857 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:38:48.254453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217551222.mount: Deactivated successfully. Nov 8 00:38:48.881968 containerd[1577]: time="2025-11-08T00:38:48.881841543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:48.883050 containerd[1577]: time="2025-11-08T00:38:48.882752094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:38:48.883682 containerd[1577]: time="2025-11-08T00:38:48.883620785Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:48.887076 containerd[1577]: time="2025-11-08T00:38:48.886056906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:48.887076 containerd[1577]: time="2025-11-08T00:38:48.886931747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.561410974s" Nov 8 00:38:48.887076 containerd[1577]: time="2025-11-08T00:38:48.886970907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:38:48.888335 containerd[1577]: time="2025-11-08T00:38:48.888242468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:38:48.907276 containerd[1577]: time="2025-11-08T00:38:48.907204469Z" level=info msg="CreateContainer within sandbox \"880fe729505953e4631fa4fde9f860a2c2f04e36bd78a37bd5958c69283a7670\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:38:48.916324 containerd[1577]: time="2025-11-08T00:38:48.916264995Z" level=info msg="CreateContainer within sandbox \"880fe729505953e4631fa4fde9f860a2c2f04e36bd78a37bd5958c69283a7670\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cc315d875cb3236b4bd38c31d9e7905fbd8680bd624ea9af5a2e3857fd713b40\"" Nov 8 00:38:48.919556 containerd[1577]: time="2025-11-08T00:38:48.918366796Z" level=info msg="StartContainer for \"cc315d875cb3236b4bd38c31d9e7905fbd8680bd624ea9af5a2e3857fd713b40\"" Nov 8 00:38:48.992421 kubelet[2675]: E1108 00:38:48.992379 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:38:49.003048 containerd[1577]: time="2025-11-08T00:38:49.002987440Z" level=info msg="StartContainer for \"cc315d875cb3236b4bd38c31d9e7905fbd8680bd624ea9af5a2e3857fd713b40\" returns successfully" Nov 8 00:38:49.116804 kubelet[2675]: E1108 00:38:49.116728 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:49.153170 kubelet[2675]: E1108 00:38:49.152235 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:38:49.153170 kubelet[2675]: W1108 00:38:49.152256 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:38:49.153170 kubelet[2675]: E1108 00:38:49.152275 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:38:49.154356 kubelet[2675]: E1108 00:38:49.154340 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:38:49.154597 kubelet[2675]: W1108 00:38:49.154471 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:38:49.154597 kubelet[2675]: E1108 00:38:49.154495 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:38:49.154929 kubelet[2675]: E1108 00:38:49.154833 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:38:49.154929 kubelet[2675]: W1108 00:38:49.154844 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:38:49.154929 kubelet[2675]: E1108 00:38:49.154854 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:38:49.155287 kubelet[2675]: E1108 00:38:49.155182 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:38:49.155287 kubelet[2675]: W1108 00:38:49.155193 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:38:49.155287 kubelet[2675]: E1108 00:38:49.155203 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:38:49.572294 containerd[1577]: time="2025-11-08T00:38:49.572240809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:49.573519 containerd[1577]: time="2025-11-08T00:38:49.573405080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:38:49.574074 containerd[1577]: time="2025-11-08T00:38:49.574032841Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:49.576451 containerd[1577]: time="2025-11-08T00:38:49.576389212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:49.577684 containerd[1577]: time="2025-11-08T00:38:49.577032373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 688.511776ms" Nov 8 00:38:49.577684 containerd[1577]: time="2025-11-08T00:38:49.577068673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:38:49.579667 containerd[1577]: time="2025-11-08T00:38:49.579508994Z" level=info msg="CreateContainer within sandbox \"74610eb876eb92946674a9a45a81ae5ec836ba558ed8f9010bf93791348f7f21\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:38:49.595682 containerd[1577]: time="2025-11-08T00:38:49.595654215Z" level=info msg="CreateContainer within sandbox \"74610eb876eb92946674a9a45a81ae5ec836ba558ed8f9010bf93791348f7f21\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ba94ce168c70c029ea5f88a9c4602d95121309c588297c41972279c3ab023d19\"" Nov 8 00:38:49.597383 containerd[1577]: time="2025-11-08T00:38:49.597221076Z" level=info msg="StartContainer for \"ba94ce168c70c029ea5f88a9c4602d95121309c588297c41972279c3ab023d19\"" Nov 8 00:38:49.676689 containerd[1577]: time="2025-11-08T00:38:49.676324630Z" level=info msg="StartContainer for \"ba94ce168c70c029ea5f88a9c4602d95121309c588297c41972279c3ab023d19\" returns successfully" Nov 8 00:38:49.764652 containerd[1577]: time="2025-11-08T00:38:49.764011791Z" level=info msg="shim disconnected" id=ba94ce168c70c029ea5f88a9c4602d95121309c588297c41972279c3ab023d19 namespace=k8s.io Nov 8 00:38:49.764652 containerd[1577]: time="2025-11-08T00:38:49.764091961Z" level=warning msg="cleaning up after shim disconnected" id=ba94ce168c70c029ea5f88a9c4602d95121309c588297c41972279c3ab023d19 namespace=k8s.io Nov 8 00:38:49.764652 containerd[1577]: time="2025-11-08T00:38:49.764116711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:38:50.039406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba94ce168c70c029ea5f88a9c4602d95121309c588297c41972279c3ab023d19-rootfs.mount: Deactivated successfully. 
Nov 8 00:38:50.119868 kubelet[2675]: I1108 00:38:50.119751 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:38:50.120500 kubelet[2675]: E1108 00:38:50.120101 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:50.120675 kubelet[2675]: E1108 00:38:50.120655 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:50.122311 containerd[1577]: time="2025-11-08T00:38:50.122276982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:38:50.145182 kubelet[2675]: I1108 00:38:50.144576 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79ff6687b-ftjsf" podStartSLOduration=2.581119203 podStartE2EDuration="4.144556698s" podCreationTimestamp="2025-11-08 00:38:46 +0000 UTC" firstStartedPulling="2025-11-08 00:38:47.324540383 +0000 UTC m=+19.440010469" lastFinishedPulling="2025-11-08 00:38:48.887977868 +0000 UTC m=+21.003447964" observedRunningTime="2025-11-08 00:38:49.159441017 +0000 UTC m=+21.274911123" watchObservedRunningTime="2025-11-08 00:38:50.144556698 +0000 UTC m=+22.260026784" Nov 8 00:38:50.995523 kubelet[2675]: E1108 00:38:50.995470 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:38:52.046851 containerd[1577]: time="2025-11-08T00:38:52.046811658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:52.047918 containerd[1577]: time="2025-11-08T00:38:52.047852739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:38:52.048718 containerd[1577]: time="2025-11-08T00:38:52.048362679Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:52.050335 containerd[1577]: time="2025-11-08T00:38:52.050313701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:52.051278 containerd[1577]: time="2025-11-08T00:38:52.051252721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.928938209s" Nov 8 00:38:52.051341 containerd[1577]: time="2025-11-08T00:38:52.051282381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:38:52.053863 containerd[1577]: time="2025-11-08T00:38:52.053699584Z" level=info msg="CreateContainer within sandbox 
\"74610eb876eb92946674a9a45a81ae5ec836ba558ed8f9010bf93791348f7f21\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:38:52.066884 containerd[1577]: time="2025-11-08T00:38:52.066850055Z" level=info msg="CreateContainer within sandbox \"74610eb876eb92946674a9a45a81ae5ec836ba558ed8f9010bf93791348f7f21\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7b4d2b0d68672693f2359da53db2fc9efc67bc3c792118f77ac7f3c99a7fa698\"" Nov 8 00:38:52.067281 containerd[1577]: time="2025-11-08T00:38:52.067233585Z" level=info msg="StartContainer for \"7b4d2b0d68672693f2359da53db2fc9efc67bc3c792118f77ac7f3c99a7fa698\"" Nov 8 00:38:52.142350 containerd[1577]: time="2025-11-08T00:38:52.142309267Z" level=info msg="StartContainer for \"7b4d2b0d68672693f2359da53db2fc9efc67bc3c792118f77ac7f3c99a7fa698\" returns successfully" Nov 8 00:38:52.655675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b4d2b0d68672693f2359da53db2fc9efc67bc3c792118f77ac7f3c99a7fa698-rootfs.mount: Deactivated successfully. Nov 8 00:38:52.686413 kubelet[2675]: I1108 00:38:52.685756 2675 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:38:52.718584 containerd[1577]: time="2025-11-08T00:38:52.718444484Z" level=info msg="shim disconnected" id=7b4d2b0d68672693f2359da53db2fc9efc67bc3c792118f77ac7f3c99a7fa698 namespace=k8s.io Nov 8 00:38:52.718584 containerd[1577]: time="2025-11-08T00:38:52.718499834Z" level=warning msg="cleaning up after shim disconnected" id=7b4d2b0d68672693f2359da53db2fc9efc67bc3c792118f77ac7f3c99a7fa698 namespace=k8s.io Nov 8 00:38:52.718584 containerd[1577]: time="2025-11-08T00:38:52.718510124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:38:52.765384 containerd[1577]: time="2025-11-08T00:38:52.764564072Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:38:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:38:52.798854 kubelet[2675]: I1108 00:38:52.798797 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/945f0c5d-79d5-427e-a435-dd67b16eeed0-calico-apiserver-certs\") pod \"calico-apiserver-69649455c-fj7f9\" (UID: \"945f0c5d-79d5-427e-a435-dd67b16eeed0\") " pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" Nov 8 00:38:52.798854 kubelet[2675]: I1108 00:38:52.798849 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wh8\" (UniqueName: \"kubernetes.io/projected/3fba8684-7c7b-4843-be36-35310c939b73-kube-api-access-z7wh8\") pod \"whisker-66f57989b9-sfrdn\" (UID: \"3fba8684-7c7b-4843-be36-35310c939b73\") " pod="calico-system/whisker-66f57989b9-sfrdn" Nov 8 00:38:52.799036 kubelet[2675]: I1108 00:38:52.798877 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxqt\" (UniqueName: \"kubernetes.io/projected/548b6544-42df-4869-bfa7-bb27245d2cb1-kube-api-access-jqxqt\") pod \"calico-apiserver-69649455c-qzjh9\" (UID: \"548b6544-42df-4869-bfa7-bb27245d2cb1\") " pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" Nov 8 00:38:52.799036 kubelet[2675]: I1108 00:38:52.798924 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/39e4cff7-6b76-45e5-9e76-44418507cde4-tigera-ca-bundle\") pod \"calico-kube-controllers-74b6646fb4-vqzk2\" (UID: \"39e4cff7-6b76-45e5-9e76-44418507cde4\") " pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" Nov 8 00:38:52.799036 kubelet[2675]: I1108 00:38:52.798945 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fcf74c-bacb-403a-b9d1-404b70dbc1f8-goldmane-ca-bundle\") pod \"goldmane-666569f655-gxx8g\" (UID: \"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8\") " pod="calico-system/goldmane-666569f655-gxx8g" Nov 8 00:38:52.799036 kubelet[2675]: I1108 00:38:52.798965 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtp5v\" (UniqueName: \"kubernetes.io/projected/c4fcf74c-bacb-403a-b9d1-404b70dbc1f8-kube-api-access-gtp5v\") pod \"goldmane-666569f655-gxx8g\" (UID: \"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8\") " pod="calico-system/goldmane-666569f655-gxx8g" Nov 8 00:38:52.799036 kubelet[2675]: I1108 00:38:52.798983 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3fba8684-7c7b-4843-be36-35310c939b73-whisker-backend-key-pair\") pod \"whisker-66f57989b9-sfrdn\" (UID: \"3fba8684-7c7b-4843-be36-35310c939b73\") " pod="calico-system/whisker-66f57989b9-sfrdn" Nov 8 00:38:52.799427 kubelet[2675]: I1108 00:38:52.799002 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4fcf74c-bacb-403a-b9d1-404b70dbc1f8-config\") pod \"goldmane-666569f655-gxx8g\" (UID: \"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8\") " pod="calico-system/goldmane-666569f655-gxx8g" Nov 8 00:38:52.799427 kubelet[2675]: I1108 00:38:52.799021 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c4fcf74c-bacb-403a-b9d1-404b70dbc1f8-goldmane-key-pair\") pod \"goldmane-666569f655-gxx8g\" (UID: \"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8\") " pod="calico-system/goldmane-666569f655-gxx8g" Nov 8 00:38:52.799427 kubelet[2675]: I1108 00:38:52.799039 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdt4\" (UniqueName: \"kubernetes.io/projected/be1ef78a-3d23-4e67-a9e4-62513d5dd793-kube-api-access-lqdt4\") pod \"coredns-668d6bf9bc-zwjrp\" (UID: \"be1ef78a-3d23-4e67-a9e4-62513d5dd793\") " pod="kube-system/coredns-668d6bf9bc-zwjrp" Nov 8 00:38:52.799427 kubelet[2675]: I1108 00:38:52.799057 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f23c5b5-4cfa-46d8-aaba-cb061e55e03e-config-volume\") pod \"coredns-668d6bf9bc-sffch\" (UID: \"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e\") " pod="kube-system/coredns-668d6bf9bc-sffch" Nov 8 00:38:52.799427 kubelet[2675]: I1108 00:38:52.799075 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fba8684-7c7b-4843-be36-35310c939b73-whisker-ca-bundle\") pod \"whisker-66f57989b9-sfrdn\" (UID: \"3fba8684-7c7b-4843-be36-35310c939b73\") " pod="calico-system/whisker-66f57989b9-sfrdn" Nov 8 00:38:52.800150 kubelet[2675]: I1108 00:38:52.799094 2675 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1ef78a-3d23-4e67-a9e4-62513d5dd793-config-volume\") pod \"coredns-668d6bf9bc-zwjrp\" (UID: \"be1ef78a-3d23-4e67-a9e4-62513d5dd793\") " pod="kube-system/coredns-668d6bf9bc-zwjrp" Nov 8 00:38:52.800150 kubelet[2675]: I1108 00:38:52.799112 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mcn\" (UniqueName: \"kubernetes.io/projected/945f0c5d-79d5-427e-a435-dd67b16eeed0-kube-api-access-c5mcn\") pod \"calico-apiserver-69649455c-fj7f9\" (UID: \"945f0c5d-79d5-427e-a435-dd67b16eeed0\") " pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" Nov 8 00:38:52.802044 kubelet[2675]: I1108 00:38:52.802020 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9t5h\" (UniqueName: \"kubernetes.io/projected/9f23c5b5-4cfa-46d8-aaba-cb061e55e03e-kube-api-access-j9t5h\") pod \"coredns-668d6bf9bc-sffch\" (UID: \"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e\") " pod="kube-system/coredns-668d6bf9bc-sffch" Nov 8 00:38:52.802096 kubelet[2675]: I1108 00:38:52.802057 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/548b6544-42df-4869-bfa7-bb27245d2cb1-calico-apiserver-certs\") pod \"calico-apiserver-69649455c-qzjh9\" (UID: \"548b6544-42df-4869-bfa7-bb27245d2cb1\") " pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" Nov 8 00:38:52.802151 kubelet[2675]: I1108 00:38:52.802114 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6j5t\" (UniqueName: \"kubernetes.io/projected/39e4cff7-6b76-45e5-9e76-44418507cde4-kube-api-access-k6j5t\") pod \"calico-kube-controllers-74b6646fb4-vqzk2\" (UID: \"39e4cff7-6b76-45e5-9e76-44418507cde4\") " pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" Nov 8 00:38:52.996949 containerd[1577]: time="2025-11-08T00:38:52.996633404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mdrsj,Uid:3e31263c-8cf9-4e4b-a04e-7c52af3f73c1,Namespace:calico-system,Attempt:0,}" Nov 8 00:38:53.054568 containerd[1577]: time="2025-11-08T00:38:53.054501384Z" level=error msg="Failed to destroy network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.055480 containerd[1577]: time="2025-11-08T00:38:53.054879175Z" level=error msg="encountered an error cleaning up failed sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.055480 containerd[1577]: time="2025-11-08T00:38:53.054919245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mdrsj,Uid:3e31263c-8cf9-4e4b-a04e-7c52af3f73c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.055547 kubelet[2675]: E1108 00:38:53.055179 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.055547 kubelet[2675]: E1108 00:38:53.055459 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mdrsj" Nov 8 00:38:53.055547 kubelet[2675]: E1108 00:38:53.055481 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mdrsj" Nov 8 00:38:53.055633 kubelet[2675]: E1108 00:38:53.055525 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:38:53.069713 kubelet[2675]: E1108 00:38:53.069582 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:53.071741 containerd[1577]: time="2025-11-08T00:38:53.071648440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b6646fb4-vqzk2,Uid:39e4cff7-6b76-45e5-9e76-44418507cde4,Namespace:calico-system,Attempt:0,}" Nov 8 00:38:53.072181 containerd[1577]: time="2025-11-08T00:38:53.071838900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sffch,Uid:9f23c5b5-4cfa-46d8-aaba-cb061e55e03e,Namespace:kube-system,Attempt:0,}" Nov 8 00:38:53.074305 kubelet[2675]: E1108 00:38:53.073323 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:53.074656 containerd[1577]: time="2025-11-08T00:38:53.074622032Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-zwjrp,Uid:be1ef78a-3d23-4e67-a9e4-62513d5dd793,Namespace:kube-system,Attempt:0,}" Nov 8 00:38:53.080669 containerd[1577]: time="2025-11-08T00:38:53.079390766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-fj7f9,Uid:945f0c5d-79d5-427e-a435-dd67b16eeed0,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:38:53.086458 containerd[1577]: time="2025-11-08T00:38:53.086398792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f57989b9-sfrdn,Uid:3fba8684-7c7b-4843-be36-35310c939b73,Namespace:calico-system,Attempt:0,}" Nov 8 00:38:53.093009 containerd[1577]: time="2025-11-08T00:38:53.092878398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-qzjh9,Uid:548b6544-42df-4869-bfa7-bb27245d2cb1,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:38:53.093425 containerd[1577]: time="2025-11-08T00:38:53.093278558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxx8g,Uid:c4fcf74c-bacb-403a-b9d1-404b70dbc1f8,Namespace:calico-system,Attempt:0,}" Nov 8 00:38:53.149429 kubelet[2675]: E1108 00:38:53.149394 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:53.164568 kubelet[2675]: I1108 00:38:53.164541 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:38:53.165057 containerd[1577]: time="2025-11-08T00:38:53.165022591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:38:53.165834 containerd[1577]: time="2025-11-08T00:38:53.165802851Z" level=info msg="StopPodSandbox for \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\"" Nov 8 00:38:53.170219 containerd[1577]: time="2025-11-08T00:38:53.165958941Z" level=info msg="Ensure that sandbox 721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96 in task-service has been cleanup successfully" Nov 8 00:38:53.274794 containerd[1577]: time="2025-11-08T00:38:53.274689026Z" level=error msg="Failed to destroy network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.275499 containerd[1577]: time="2025-11-08T00:38:53.275472416Z" level=error msg="encountered an error cleaning up failed sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.275953 containerd[1577]: time="2025-11-08T00:38:53.275929117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b6646fb4-vqzk2,Uid:39e4cff7-6b76-45e5-9e76-44418507cde4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.277359 
kubelet[2675]: E1108 00:38:53.277277 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.277439 kubelet[2675]: E1108 00:38:53.277418 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" Nov 8 00:38:53.277466 kubelet[2675]: E1108 00:38:53.277447 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" Nov 8 00:38:53.277575 kubelet[2675]: E1108 00:38:53.277535 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74b6646fb4-vqzk2_calico-system(39e4cff7-6b76-45e5-9e76-44418507cde4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74b6646fb4-vqzk2_calico-system(39e4cff7-6b76-45e5-9e76-44418507cde4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:38:53.314805 containerd[1577]: time="2025-11-08T00:38:53.314571671Z" level=error msg="Failed to destroy network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.316774 containerd[1577]: time="2025-11-08T00:38:53.316428562Z" level=error msg="encountered an error cleaning up failed sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.316774 containerd[1577]: time="2025-11-08T00:38:53.316643022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-qzjh9,Uid:548b6544-42df-4869-bfa7-bb27245d2cb1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.318174 kubelet[2675]: E1108 00:38:53.317022 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.318174 kubelet[2675]: E1108 00:38:53.317351 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" Nov 8 00:38:53.318174 kubelet[2675]: E1108 00:38:53.317374 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" Nov 8 00:38:53.318488 kubelet[2675]: E1108 00:38:53.317417 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69649455c-qzjh9_calico-apiserver(548b6544-42df-4869-bfa7-bb27245d2cb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69649455c-qzjh9_calico-apiserver(548b6544-42df-4869-bfa7-bb27245d2cb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:38:53.345412 containerd[1577]: time="2025-11-08T00:38:53.345167307Z" level=error msg="StopPodSandbox for \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\" failed" error="failed to destroy network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.345566 kubelet[2675]: E1108 00:38:53.345518 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:38:53.345646 kubelet[2675]: E1108 00:38:53.345585 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96"} Nov 8 00:38:53.345646 kubelet[2675]: E1108 00:38:53.345639 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:53.345856 kubelet[2675]: E1108 00:38:53.345661 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:38:53.382452 containerd[1577]: time="2025-11-08T00:38:53.382415120Z" level=error msg="Failed to destroy network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.383021 containerd[1577]: time="2025-11-08T00:38:53.382937530Z" level=error msg="encountered an error cleaning up failed sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.383021 containerd[1577]: time="2025-11-08T00:38:53.382981400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwjrp,Uid:be1ef78a-3d23-4e67-a9e4-62513d5dd793,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.383537 kubelet[2675]: E1108 00:38:53.383331 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.383537 kubelet[2675]: E1108 00:38:53.383391 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwjrp" Nov 8 00:38:53.383537 kubelet[2675]: E1108 00:38:53.383411 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwjrp" Nov 8 00:38:53.385254 kubelet[2675]: E1108 00:38:53.383475 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwjrp_kube-system(be1ef78a-3d23-4e67-a9e4-62513d5dd793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwjrp_kube-system(be1ef78a-3d23-4e67-a9e4-62513d5dd793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwjrp" podUID="be1ef78a-3d23-4e67-a9e4-62513d5dd793" Nov 8 00:38:53.387481 containerd[1577]: time="2025-11-08T00:38:53.387342744Z" level=error msg="Failed to destroy network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.387752 containerd[1577]: time="2025-11-08T00:38:53.387679744Z" level=error msg="encountered an error cleaning up failed sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.387752 containerd[1577]: time="2025-11-08T00:38:53.387725374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-fj7f9,Uid:945f0c5d-79d5-427e-a435-dd67b16eeed0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.388348 kubelet[2675]: E1108 00:38:53.387898 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.388348 kubelet[2675]: E1108 00:38:53.387939 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" Nov 8 00:38:53.388348 kubelet[2675]: E1108 00:38:53.387957 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" Nov 8 00:38:53.388605 kubelet[2675]: E1108 00:38:53.387987 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69649455c-fj7f9_calico-apiserver(945f0c5d-79d5-427e-a435-dd67b16eeed0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69649455c-fj7f9_calico-apiserver(945f0c5d-79d5-427e-a435-dd67b16eeed0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:38:53.401877 containerd[1577]: time="2025-11-08T00:38:53.401848607Z" level=error msg="Failed to destroy network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.403570 containerd[1577]: time="2025-11-08T00:38:53.403504697Z" level=error msg="encountered an error cleaning up failed sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.403635 containerd[1577]: time="2025-11-08T00:38:53.403590128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sffch,Uid:9f23c5b5-4cfa-46d8-aaba-cb061e55e03e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.404009 kubelet[2675]: E1108 00:38:53.403936 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.404489 kubelet[2675]: E1108 00:38:53.404021 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sffch" Nov 8 00:38:53.404489 kubelet[2675]: E1108 00:38:53.404081 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sffch" Nov 8 00:38:53.407017 kubelet[2675]: E1108 00:38:53.406553 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sffch_kube-system(9f23c5b5-4cfa-46d8-aaba-cb061e55e03e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sffch_kube-system(9f23c5b5-4cfa-46d8-aaba-cb061e55e03e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sffch" podUID="9f23c5b5-4cfa-46d8-aaba-cb061e55e03e" Nov 8 00:38:53.408208 containerd[1577]: time="2025-11-08T00:38:53.407627372Z" level=error msg="Failed to destroy network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.408208 containerd[1577]: time="2025-11-08T00:38:53.408166922Z" level=error msg="encountered an error cleaning up failed sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.408319 containerd[1577]: time="2025-11-08T00:38:53.408236392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxx8g,Uid:c4fcf74c-bacb-403a-b9d1-404b70dbc1f8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.408588 kubelet[2675]: E1108 00:38:53.408527 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.408672 kubelet[2675]: E1108 00:38:53.408586 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gxx8g" Nov 8 00:38:53.408672 kubelet[2675]: E1108 00:38:53.408606 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gxx8g" Nov 8 00:38:53.408672 kubelet[2675]: E1108 00:38:53.408634 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gxx8g_calico-system(c4fcf74c-bacb-403a-b9d1-404b70dbc1f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gxx8g_calico-system(c4fcf74c-bacb-403a-b9d1-404b70dbc1f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:38:53.419047 containerd[1577]: time="2025-11-08T00:38:53.418993412Z" level=error msg="Failed to destroy network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.419355 containerd[1577]: time="2025-11-08T00:38:53.419327772Z" level=error msg="encountered an error cleaning up failed sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.419394 containerd[1577]: time="2025-11-08T00:38:53.419368981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f57989b9-sfrdn,Uid:3fba8684-7c7b-4843-be36-35310c939b73,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.419621 kubelet[2675]: E1108 00:38:53.419561 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:53.419661 kubelet[2675]: E1108 00:38:53.419635 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66f57989b9-sfrdn" Nov 8 00:38:53.419685 kubelet[2675]: E1108 00:38:53.419669 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66f57989b9-sfrdn" Nov 8 00:38:53.419737 kubelet[2675]: E1108 00:38:53.419710 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66f57989b9-sfrdn_calico-system(3fba8684-7c7b-4843-be36-35310c939b73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66f57989b9-sfrdn_calico-system(3fba8684-7c7b-4843-be36-35310c939b73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66f57989b9-sfrdn" podUID="3fba8684-7c7b-4843-be36-35310c939b73" Nov 8 00:38:54.069733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce-shm.mount: Deactivated successfully. Nov 8 00:38:54.070034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340-shm.mount: Deactivated successfully. Nov 8 00:38:54.070244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef-shm.mount: Deactivated successfully. 
Nov 8 00:38:54.170169 kubelet[2675]: I1108 00:38:54.169945 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:38:54.171332 containerd[1577]: time="2025-11-08T00:38:54.170885852Z" level=info msg="StopPodSandbox for \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\"" Nov 8 00:38:54.171332 containerd[1577]: time="2025-11-08T00:38:54.171037713Z" level=info msg="Ensure that sandbox 1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5 in task-service has been cleanup successfully" Nov 8 00:38:54.174873 kubelet[2675]: I1108 00:38:54.174471 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:38:54.175382 containerd[1577]: time="2025-11-08T00:38:54.174987606Z" level=info msg="StopPodSandbox for \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\"" Nov 8 00:38:54.175382 containerd[1577]: time="2025-11-08T00:38:54.175202336Z" level=info msg="Ensure that sandbox f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf in task-service has been cleanup successfully" Nov 8 00:38:54.177677 kubelet[2675]: I1108 00:38:54.177646 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:38:54.180548 containerd[1577]: time="2025-11-08T00:38:54.180526461Z" level=info msg="StopPodSandbox for \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\"" Nov 8 00:38:54.181184 containerd[1577]: time="2025-11-08T00:38:54.181060482Z" level=info msg="Ensure that sandbox d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb in task-service has been cleanup successfully" Nov 8 00:38:54.190964 containerd[1577]: time="2025-11-08T00:38:54.190930020Z" level=info msg="StopPodSandbox for \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\"" Nov 8 00:38:54.191188 kubelet[2675]: I1108 00:38:54.191157 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:38:54.191384 containerd[1577]: time="2025-11-08T00:38:54.191362141Z" level=info msg="Ensure that sandbox 5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce in task-service has been cleanup successfully" Nov 8 00:38:54.196350 kubelet[2675]: I1108 00:38:54.195263 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:38:54.200512 containerd[1577]: time="2025-11-08T00:38:54.197243956Z" level=info msg="StopPodSandbox for \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\"" Nov 8 00:38:54.201018 containerd[1577]: time="2025-11-08T00:38:54.200968290Z" level=info msg="Ensure that sandbox fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340 in task-service has been cleanup successfully" Nov 8 00:38:54.215624 kubelet[2675]: I1108 00:38:54.215585 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:38:54.220704 containerd[1577]: time="2025-11-08T00:38:54.220682567Z" level=info msg="StopPodSandbox for \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\"" Nov 8 00:38:54.222760 kubelet[2675]: I1108 
00:38:54.222663 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:38:54.222880 containerd[1577]: time="2025-11-08T00:38:54.222858519Z" level=info msg="Ensure that sandbox 2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880 in task-service has been cleanup successfully" Nov 8 00:38:54.224499 containerd[1577]: time="2025-11-08T00:38:54.223826341Z" level=info msg="StopPodSandbox for \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\"" Nov 8 00:38:54.224694 containerd[1577]: time="2025-11-08T00:38:54.224676381Z" level=info msg="Ensure that sandbox 4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef in task-service has been cleanup successfully" Nov 8 00:38:54.318635 containerd[1577]: time="2025-11-08T00:38:54.318563047Z" level=error msg="StopPodSandbox for \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\" failed" error="failed to destroy network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:54.319680 containerd[1577]: time="2025-11-08T00:38:54.319640958Z" level=error msg="StopPodSandbox for \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\" failed" error="failed to destroy network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:54.319959 kubelet[2675]: E1108 00:38:54.319862 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:38:54.320028 kubelet[2675]: E1108 00:38:54.319967 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5"} Nov 8 00:38:54.320028 kubelet[2675]: E1108 00:38:54.320015 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fba8684-7c7b-4843-be36-35310c939b73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:54.320247 kubelet[2675]: E1108 00:38:54.320060 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fba8684-7c7b-4843-be36-35310c939b73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66f57989b9-sfrdn" podUID="3fba8684-7c7b-4843-be36-35310c939b73" Nov 8 00:38:54.323923 kubelet[2675]: E1108 00:38:54.321952 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:38:54.323923 kubelet[2675]: E1108 00:38:54.322026 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf"} Nov 8 00:38:54.323923 kubelet[2675]: E1108 00:38:54.322056 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"945f0c5d-79d5-427e-a435-dd67b16eeed0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:54.323923 kubelet[2675]: E1108 00:38:54.322088 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"945f0c5d-79d5-427e-a435-dd67b16eeed0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:38:54.324505 containerd[1577]: time="2025-11-08T00:38:54.324467852Z" level=error msg="StopPodSandbox for \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\" failed" error="failed to destroy network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:54.324890 kubelet[2675]: E1108 00:38:54.324827 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:38:54.324890 kubelet[2675]: E1108 00:38:54.324872 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb"} Nov 8 00:38:54.324958 kubelet[2675]: E1108 00:38:54.324897 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"548b6544-42df-4869-bfa7-bb27245d2cb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:54.324958 kubelet[2675]: E1108 00:38:54.324924 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"548b6544-42df-4869-bfa7-bb27245d2cb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:38:54.326556 containerd[1577]: time="2025-11-08T00:38:54.325999623Z" level=error msg="StopPodSandbox for \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\" failed" error="failed to destroy network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:54.326915 kubelet[2675]: E1108 00:38:54.326271 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:38:54.326915 kubelet[2675]: E1108 00:38:54.326300 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce"} Nov 8 00:38:54.326915 kubelet[2675]: E1108 00:38:54.326374 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be1ef78a-3d23-4e67-a9e4-62513d5dd793\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:54.326915 kubelet[2675]: E1108 00:38:54.326395 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be1ef78a-3d23-4e67-a9e4-62513d5dd793\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwjrp" podUID="be1ef78a-3d23-4e67-a9e4-62513d5dd793" Nov 8 00:38:54.353931 containerd[1577]: 
time="2025-11-08T00:38:54.353384188Z" level=error msg="StopPodSandbox for \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\" failed" error="failed to destroy network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:54.354008 kubelet[2675]: E1108 00:38:54.353608 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:38:54.354008 kubelet[2675]: E1108 00:38:54.353675 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef"} Nov 8 00:38:54.354008 kubelet[2675]: E1108 00:38:54.353712 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39e4cff7-6b76-45e5-9e76-44418507cde4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:54.354008 kubelet[2675]: E1108 00:38:54.353758 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39e4cff7-6b76-45e5-9e76-44418507cde4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:38:54.354663 containerd[1577]: time="2025-11-08T00:38:54.354515999Z" level=error msg="StopPodSandbox for \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\" failed" error="failed to destroy network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:54.354712 containerd[1577]: time="2025-11-08T00:38:54.354624319Z" level=error msg="StopPodSandbox for \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\" failed" error="failed to destroy network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:38:54.354889 kubelet[2675]: E1108 00:38:54.354814 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:38:54.354889 kubelet[2675]: E1108 00:38:54.354864 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880"} Nov 8 00:38:54.354889 kubelet[2675]: E1108 00:38:54.354888 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:54.357297 kubelet[2675]: E1108 00:38:54.354910 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:38:54.357297 kubelet[2675]: E1108 00:38:54.354937 2675 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:38:54.357297 kubelet[2675]: E1108 00:38:54.354955 2675 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340"} Nov 8 00:38:54.357297 kubelet[2675]: E1108 00:38:54.354977 2675 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:38:54.357487 kubelet[2675]: E1108 00:38:54.354994 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sffch" podUID="9f23c5b5-4cfa-46d8-aaba-cb061e55e03e" Nov 8 00:38:57.277230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567071610.mount: Deactivated successfully. Nov 8 00:38:57.306662 containerd[1577]: time="2025-11-08T00:38:57.306628783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:57.307570 containerd[1577]: time="2025-11-08T00:38:57.307533624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:38:57.308092 containerd[1577]: time="2025-11-08T00:38:57.308047775Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:57.310791 containerd[1577]: time="2025-11-08T00:38:57.309582206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:38:57.310791 containerd[1577]: time="2025-11-08T00:38:57.310687897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.145623316s" Nov 8 00:38:57.310791 containerd[1577]: time="2025-11-08T00:38:57.310712237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:38:57.328389 containerd[1577]: time="2025-11-08T00:38:57.328362265Z" level=info msg="CreateContainer within sandbox \"74610eb876eb92946674a9a45a81ae5ec836ba558ed8f9010bf93791348f7f21\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:38:57.350155 containerd[1577]: time="2025-11-08T00:38:57.349389647Z" level=info msg="CreateContainer within sandbox \"74610eb876eb92946674a9a45a81ae5ec836ba558ed8f9010bf93791348f7f21\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f1749a7a08cb27c4c7ae052de1c7290d5317044ce3979c7386975e4123bbadb8\"" Nov 8 00:38:57.350848 containerd[1577]: time="2025-11-08T00:38:57.350821118Z" level=info msg="StartContainer for \"f1749a7a08cb27c4c7ae052de1c7290d5317044ce3979c7386975e4123bbadb8\"" Nov 8 00:38:57.351727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029888187.mount: Deactivated successfully. Nov 8 00:38:57.430285 containerd[1577]: time="2025-11-08T00:38:57.430225258Z" level=info msg="StartContainer for \"f1749a7a08cb27c4c7ae052de1c7290d5317044ce3979c7386975e4123bbadb8\" returns successfully" Nov 8 00:38:57.543118 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:38:57.543356 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
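All of the StopPodSandbox failures above share one root cause: the Calico CNI plugin reads its node name from /var/lib/calico/nodename, a file the calico/node container writes once it starts, and until then every CNI ADD or DEL surfaces the same stat error. A minimal sketch of that lookup, assuming only the documented file path (illustrative, not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is where calico/node records the node name on startup;
// the CNI plugin reads it on every ADD/DEL.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename reproduces the failure mode in the log: until
// calico/node has written the file, every CNI operation fails with
// "no such file or directory" plus the remediation hint.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

With calico-node started above (container f1749a7a…), the file appears and the retried teardowns later in the log begin to succeed.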
Nov 8 00:38:57.626181 containerd[1577]: time="2025-11-08T00:38:57.625880297Z" level=info msg="StopPodSandbox for \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\"" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.735 [INFO][3875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.736 [INFO][3875] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" iface="eth0" netns="/var/run/netns/cni-90b5c161-9247-ce36-d82f-4421967982ed" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.737 [INFO][3875] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" iface="eth0" netns="/var/run/netns/cni-90b5c161-9247-ce36-d82f-4421967982ed" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.743 [INFO][3875] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" iface="eth0" netns="/var/run/netns/cni-90b5c161-9247-ce36-d82f-4421967982ed" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.743 [INFO][3875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.743 [INFO][3875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.768 [INFO][3889] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.769 [INFO][3889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.770 [INFO][3889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.781 [WARNING][3889] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.781 [INFO][3889] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.783 [INFO][3889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:38:57.790916 containerd[1577]: 2025-11-08 00:38:57.788 [INFO][3875] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:38:57.790916 containerd[1577]: time="2025-11-08T00:38:57.790760964Z" level=info msg="TearDown network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\" successfully" Nov 8 00:38:57.790916 containerd[1577]: time="2025-11-08T00:38:57.790784314Z" level=info msg="StopPodSandbox for \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\" returns successfully" Nov 8 00:38:57.843761 kubelet[2675]: I1108 00:38:57.843711 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3fba8684-7c7b-4843-be36-35310c939b73-whisker-backend-key-pair\") pod \"3fba8684-7c7b-4843-be36-35310c939b73\" (UID: \"3fba8684-7c7b-4843-be36-35310c939b73\") " Nov 8 00:38:57.844441 kubelet[2675]: I1108 00:38:57.843771 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7wh8\" (UniqueName: \"kubernetes.io/projected/3fba8684-7c7b-4843-be36-35310c939b73-kube-api-access-z7wh8\") pod \"3fba8684-7c7b-4843-be36-35310c939b73\" (UID: \"3fba8684-7c7b-4843-be36-35310c939b73\") " Nov 8 00:38:57.844441 kubelet[2675]: I1108 00:38:57.843819 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fba8684-7c7b-4843-be36-35310c939b73-whisker-ca-bundle\") pod \"3fba8684-7c7b-4843-be36-35310c939b73\" (UID: \"3fba8684-7c7b-4843-be36-35310c939b73\") " Nov 8 00:38:57.844441 kubelet[2675]: I1108 00:38:57.844301 2675 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fba8684-7c7b-4843-be36-35310c939b73-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3fba8684-7c7b-4843-be36-35310c939b73" (UID: "3fba8684-7c7b-4843-be36-35310c939b73"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:38:57.852756 kubelet[2675]: I1108 00:38:57.852729 2675 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fba8684-7c7b-4843-be36-35310c939b73-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3fba8684-7c7b-4843-be36-35310c939b73" (UID: "3fba8684-7c7b-4843-be36-35310c939b73"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:38:57.853285 kubelet[2675]: I1108 00:38:57.853258 2675 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fba8684-7c7b-4843-be36-35310c939b73-kube-api-access-z7wh8" (OuterVolumeSpecName: "kube-api-access-z7wh8") pod "3fba8684-7c7b-4843-be36-35310c939b73" (UID: "3fba8684-7c7b-4843-be36-35310c939b73"). InnerVolumeSpecName "kube-api-access-z7wh8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:38:57.944191 kubelet[2675]: I1108 00:38:57.944110 2675 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7wh8\" (UniqueName: \"kubernetes.io/projected/3fba8684-7c7b-4843-be36-35310c939b73-kube-api-access-z7wh8\") on node \"172-239-57-26\" DevicePath \"\"" Nov 8 00:38:57.944302 kubelet[2675]: I1108 00:38:57.944228 2675 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fba8684-7c7b-4843-be36-35310c939b73-whisker-ca-bundle\") on node \"172-239-57-26\" DevicePath \"\"" Nov 8 00:38:57.944302 kubelet[2675]: I1108 00:38:57.944271 2675 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3fba8684-7c7b-4843-be36-35310c939b73-whisker-backend-key-pair\") on node \"172-239-57-26\" DevicePath \"\"" Nov 8 00:38:58.240245 kubelet[2675]: E1108 00:38:58.238704 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:58.273738 kubelet[2675]: I1108 00:38:58.272432 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bbq46" podStartSLOduration=1.391445434 podStartE2EDuration="11.272419419s" podCreationTimestamp="2025-11-08 00:38:47 +0000 UTC" firstStartedPulling="2025-11-08 00:38:47.430472223 +0000 UTC m=+19.545942309" lastFinishedPulling="2025-11-08 00:38:57.311446178 +0000 UTC m=+29.426916294" observedRunningTime="2025-11-08 00:38:58.271034848 +0000 UTC m=+30.386504934" watchObservedRunningTime="2025-11-08 00:38:58.272419419 +0000 UTC m=+30.387889505" Nov 8 00:38:58.283786 systemd[1]: run-netns-cni\x2d90b5c161\x2d9247\x2dce36\x2dd82f\x2d4421967982ed.mount: Deactivated successfully. Nov 8 00:38:58.285327 systemd[1]: var-lib-kubelet-pods-3fba8684\x2d7c7b\x2d4843\x2dbe36\x2d35310c939b73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz7wh8.mount: Deactivated successfully. Nov 8 00:38:58.285465 systemd[1]: var-lib-kubelet-pods-3fba8684\x2d7c7b\x2d4843\x2dbe36\x2d35310c939b73-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 00:38:58.350423 kubelet[2675]: I1108 00:38:58.347271 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qbjx\" (UniqueName: \"kubernetes.io/projected/12f92cbb-00df-467b-a39b-79b1d77d20a1-kube-api-access-7qbjx\") pod \"whisker-5945b5bfd9-lpcq2\" (UID: \"12f92cbb-00df-467b-a39b-79b1d77d20a1\") " pod="calico-system/whisker-5945b5bfd9-lpcq2" Nov 8 00:38:58.350545 kubelet[2675]: I1108 00:38:58.350518 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/12f92cbb-00df-467b-a39b-79b1d77d20a1-whisker-backend-key-pair\") pod \"whisker-5945b5bfd9-lpcq2\" (UID: \"12f92cbb-00df-467b-a39b-79b1d77d20a1\") " pod="calico-system/whisker-5945b5bfd9-lpcq2" Nov 8 00:38:58.351160 kubelet[2675]: I1108 00:38:58.350615 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f92cbb-00df-467b-a39b-79b1d77d20a1-whisker-ca-bundle\") pod \"whisker-5945b5bfd9-lpcq2\" (UID: \"12f92cbb-00df-467b-a39b-79b1d77d20a1\") " pod="calico-system/whisker-5945b5bfd9-lpcq2" Nov 8 00:38:58.600845 containerd[1577]: time="2025-11-08T00:38:58.600073342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5945b5bfd9-lpcq2,Uid:12f92cbb-00df-467b-a39b-79b1d77d20a1,Namespace:calico-system,Attempt:0,}" Nov 8 00:38:58.722809 systemd-networkd[1239]: caliace35a3ff06: Link UP Nov 8 00:38:58.723262 systemd-networkd[1239]: caliace35a3ff06: Gained carrier Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.634 [INFO][3913] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.644 [INFO][3913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0 whisker-5945b5bfd9- calico-system 12f92cbb-00df-467b-a39b-79b1d77d20a1 877 0 2025-11-08 00:38:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5945b5bfd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-57-26 whisker-5945b5bfd9-lpcq2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliace35a3ff06 [] [] }} ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.644 [INFO][3913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.668 [INFO][3924] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" HandleID="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Workload="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.668 [INFO][3924] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" HandleID="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Workload="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-26", "pod":"whisker-5945b5bfd9-lpcq2", "timestamp":"2025-11-08 00:38:58.668748174 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.668 [INFO][3924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.669 [INFO][3924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.669 [INFO][3924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.674 [INFO][3924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.678 [INFO][3924] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.684 [INFO][3924] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.687 [INFO][3924] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.693 [INFO][3924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.693 [INFO][3924] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.694 [INFO][3924] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6 Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.700 [INFO][3924] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.706 [INFO][3924] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.65/26] block=192.168.31.64/26 handle="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.707 [INFO][3924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.65/26] handle="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" host="172-239-57-26" Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.707 [INFO][3924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:38:58.754246 containerd[1577]: 2025-11-08 00:38:58.707 [INFO][3924] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.65/26] IPv6=[] ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" HandleID="k8s-pod-network.fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Workload="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" Nov 8 00:38:58.755091 containerd[1577]: 2025-11-08 00:38:58.711 [INFO][3913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0", GenerateName:"whisker-5945b5bfd9-", Namespace:"calico-system", SelfLink:"", UID:"12f92cbb-00df-467b-a39b-79b1d77d20a1", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5945b5bfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"whisker-5945b5bfd9-lpcq2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliace35a3ff06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:38:58.755091 containerd[1577]: 2025-11-08 00:38:58.711 [INFO][3913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.65/32] ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" Nov 8 00:38:58.755091 containerd[1577]: 2025-11-08 00:38:58.711 [INFO][3913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliace35a3ff06 ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" Nov 8 00:38:58.755091 containerd[1577]: 2025-11-08 00:38:58.723 [INFO][3913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" Nov 8 00:38:58.755091 containerd[1577]: 2025-11-08 00:38:58.725 [INFO][3913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" 
WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0", GenerateName:"whisker-5945b5bfd9-", Namespace:"calico-system", SelfLink:"", UID:"12f92cbb-00df-467b-a39b-79b1d77d20a1", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5945b5bfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6", Pod:"whisker-5945b5bfd9-lpcq2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliace35a3ff06", MAC:"e2:42:8e:65:6e:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:38:58.755091 containerd[1577]: 2025-11-08 00:38:58.742 [INFO][3913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6" Namespace="calico-system" Pod="whisker-5945b5bfd9-lpcq2" WorkloadEndpoint="172--239--57--26-k8s-whisker--5945b5bfd9--lpcq2-eth0" Nov 8 00:38:58.777592 containerd[1577]: time="2025-11-08T00:38:58.777485747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:38:58.777922 containerd[1577]: time="2025-11-08T00:38:58.777574907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:38:58.777922 containerd[1577]: time="2025-11-08T00:38:58.777616177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:58.779346 containerd[1577]: time="2025-11-08T00:38:58.779251568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:38:58.848986 containerd[1577]: time="2025-11-08T00:38:58.848678852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5945b5bfd9-lpcq2,Uid:12f92cbb-00df-467b-a39b-79b1d77d20a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc2cdd93dcdd747545d9d24e06b4fbad4709c63e682f5f00eaecb8f0b3a7dbf6\"" Nov 8 00:38:58.852595 containerd[1577]: time="2025-11-08T00:38:58.852112385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:38:59.002331 containerd[1577]: time="2025-11-08T00:38:59.002265671Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:59.004181 containerd[1577]: time="2025-11-08T00:38:59.003610483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:38:59.004181 containerd[1577]: time="2025-11-08T00:38:59.003839733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:38:59.005283 kubelet[2675]: E1108 00:38:59.004461 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:38:59.005283 kubelet[2675]: E1108 00:38:59.004518 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:38:59.013906 kubelet[2675]: E1108 00:38:59.013578 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:446bc2bb53ee4664a662201ea699a9cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:59.018081 containerd[1577]: time="2025-11-08T00:38:59.017759318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:38:59.150219 containerd[1577]: time="2025-11-08T00:38:59.149968410Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:59.151508 containerd[1577]: time="2025-11-08T00:38:59.151448682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:38:59.151654 containerd[1577]: time="2025-11-08T00:38:59.151593542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:38:59.152082 kubelet[2675]: E1108 00:38:59.151933 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:38:59.152082 kubelet[2675]: E1108 00:38:59.152027 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:38:59.152983 kubelet[2675]: E1108 00:38:59.152590 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:59.154917 kubelet[2675]: E1108 00:38:59.154710 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:38:59.242934 kubelet[2675]: I1108 00:38:59.242865 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:38:59.244164 kubelet[2675]: E1108 00:38:59.243413 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:38:59.245633 kubelet[2675]: E1108 00:38:59.245574 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:38:59.998552 kubelet[2675]: I1108 00:38:59.996369 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fba8684-7c7b-4843-be36-35310c939b73" path="/var/lib/kubelet/pods/3fba8684-7c7b-4843-be36-35310c939b73/volumes" Nov 8 00:39:00.205363 systemd-networkd[1239]: caliace35a3ff06: Gained IPv6LL Nov 8 00:39:00.247753 kubelet[2675]: E1108 00:39:00.247280 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:39:04.994354 containerd[1577]: time="2025-11-08T00:39:04.993383783Z" level=info msg="StopPodSandbox for \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\"" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.034 [INFO][4195] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.035 [INFO][4195] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" iface="eth0" netns="/var/run/netns/cni-98ab4e62-e391-38f7-87ad-7b4b1497ad40" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.035 [INFO][4195] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" iface="eth0" netns="/var/run/netns/cni-98ab4e62-e391-38f7-87ad-7b4b1497ad40" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.036 [INFO][4195] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" iface="eth0" netns="/var/run/netns/cni-98ab4e62-e391-38f7-87ad-7b4b1497ad40" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.036 [INFO][4195] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.036 [INFO][4195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.056 [INFO][4202] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.056 [INFO][4202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.056 [INFO][4202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.061 [WARNING][4202] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.061 [INFO][4202] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.063 [INFO][4202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:05.067656 containerd[1577]: 2025-11-08 00:39:05.065 [INFO][4195] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:05.067656 containerd[1577]: time="2025-11-08T00:39:05.067708872Z" level=info msg="TearDown network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\" successfully" Nov 8 00:39:05.067656 containerd[1577]: time="2025-11-08T00:39:05.067734363Z" level=info msg="StopPodSandbox for \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\" returns successfully" Nov 8 00:39:05.069981 containerd[1577]: time="2025-11-08T00:39:05.069666744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b6646fb4-vqzk2,Uid:39e4cff7-6b76-45e5-9e76-44418507cde4,Namespace:calico-system,Attempt:1,}" Nov 8 00:39:05.073520 systemd[1]: run-netns-cni\x2d98ab4e62\x2de391\x2d38f7\x2d87ad\x2d7b4b1497ad40.mount: Deactivated successfully. 
Nov 8 00:39:05.170677 systemd-networkd[1239]: cali9d1538a245f: Link UP Nov 8 00:39:05.172103 systemd-networkd[1239]: cali9d1538a245f: Gained carrier Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.103 [INFO][4210] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.113 [INFO][4210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0 calico-kube-controllers-74b6646fb4- calico-system 39e4cff7-6b76-45e5-9e76-44418507cde4 915 0 2025-11-08 00:38:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74b6646fb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-57-26 calico-kube-controllers-74b6646fb4-vqzk2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9d1538a245f [] [] }} ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.113 [INFO][4210] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.137 [INFO][4222] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" HandleID="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.138 [INFO][4222] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" HandleID="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-26", "pod":"calico-kube-controllers-74b6646fb4-vqzk2", "timestamp":"2025-11-08 00:39:05.137934967 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.138 [INFO][4222] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.138 [INFO][4222] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.138 [INFO][4222] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.144 [INFO][4222] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.148 [INFO][4222] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.152 [INFO][4222] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.153 [INFO][4222] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.155 [INFO][4222] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.156 [INFO][4222] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.157 [INFO][4222] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9 Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.160 [INFO][4222] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.165 [INFO][4222] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.66/26] block=192.168.31.64/26 handle="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.165 [INFO][4222] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.66/26] handle="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" host="172-239-57-26" Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.165 [INFO][4222] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
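This second allocation hands 192.168.31.66 to the calico-kube-controllers pod from the same node-affine block, so every pod scheduled to 172-239-57-26 lands inside 192.168.31.64/26. Mapping a pod IP back to its block is a single mask:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	ip := netip.MustParseAddr("192.168.31.66") // calico-kube-controllers pod
	// Masked() zeroes the host bits, recovering the /26 block the
	// address was claimed from.
	fmt.Println(netip.PrefixFrom(ip, 26).Masked()) // 192.168.31.64/26
}
```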
Nov 8 00:39:05.188478 containerd[1577]: 2025-11-08 00:39:05.165 [INFO][4222] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.66/26] IPv6=[] ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" HandleID="k8s-pod-network.0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.188995 containerd[1577]: 2025-11-08 00:39:05.167 [INFO][4210] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0", GenerateName:"calico-kube-controllers-74b6646fb4-", Namespace:"calico-system", SelfLink:"", UID:"39e4cff7-6b76-45e5-9e76-44418507cde4", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b6646fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"calico-kube-controllers-74b6646fb4-vqzk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d1538a245f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:05.188995 containerd[1577]: 2025-11-08 00:39:05.167 [INFO][4210] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.66/32] ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.188995 containerd[1577]: 2025-11-08 00:39:05.167 [INFO][4210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d1538a245f ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.188995 containerd[1577]: 2025-11-08 00:39:05.169 [INFO][4210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.188995 containerd[1577]: 2025-11-08 00:39:05.171 [INFO][4210] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0", GenerateName:"calico-kube-controllers-74b6646fb4-", Namespace:"calico-system", SelfLink:"", UID:"39e4cff7-6b76-45e5-9e76-44418507cde4", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b6646fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9", Pod:"calico-kube-controllers-74b6646fb4-vqzk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d1538a245f", MAC:"da:1e:1d:33:08:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:05.188995 containerd[1577]: 2025-11-08 00:39:05.182 [INFO][4210] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9" Namespace="calico-system" Pod="calico-kube-controllers-74b6646fb4-vqzk2" WorkloadEndpoint="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:05.206732 containerd[1577]: time="2025-11-08T00:39:05.206626030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:05.206938 containerd[1577]: time="2025-11-08T00:39:05.206745110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:05.206938 containerd[1577]: time="2025-11-08T00:39:05.206795640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:05.207239 containerd[1577]: time="2025-11-08T00:39:05.206957960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:05.268599 containerd[1577]: time="2025-11-08T00:39:05.268427246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b6646fb4-vqzk2,Uid:39e4cff7-6b76-45e5-9e76-44418507cde4,Namespace:calico-system,Attempt:1,} returns sandbox id \"0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9\"" Nov 8 00:39:05.270692 containerd[1577]: time="2025-11-08T00:39:05.270473028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:39:05.409278 containerd[1577]: time="2025-11-08T00:39:05.409235956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:05.410115 containerd[1577]: time="2025-11-08T00:39:05.410038387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:39:05.410115 containerd[1577]: time="2025-11-08T00:39:05.410079427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:39:05.410221 kubelet[2675]: E1108 00:39:05.410184 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:39:05.410221 kubelet[2675]: E1108 00:39:05.410215 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:39:05.411650 kubelet[2675]: E1108 00:39:05.410312 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6j5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b6646fb4-vqzk2_calico-system(39e4cff7-6b76-45e5-9e76-44418507cde4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:05.411650 kubelet[2675]: E1108 00:39:05.411586 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:39:05.995600 containerd[1577]: time="2025-11-08T00:39:05.994877754Z" level=info msg="StopPodSandbox 
for \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\"" Nov 8 00:39:05.997290 containerd[1577]: time="2025-11-08T00:39:05.996179205Z" level=info msg="StopPodSandbox for \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\"" Nov 8 00:39:06.008171 containerd[1577]: time="2025-11-08T00:39:05.997467267Z" level=info msg="StopPodSandbox for \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\"" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.061 [INFO][4326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.064 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" iface="eth0" netns="/var/run/netns/cni-3a47ac82-cb92-f5e1-158a-b95500c2fcad" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.066 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" iface="eth0" netns="/var/run/netns/cni-3a47ac82-cb92-f5e1-158a-b95500c2fcad" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.067 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" iface="eth0" netns="/var/run/netns/cni-3a47ac82-cb92-f5e1-158a-b95500c2fcad" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.067 [INFO][4326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.067 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.115 [INFO][4348] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.115 [INFO][4348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.115 [INFO][4348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.126 [WARNING][4348] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.126 [INFO][4348] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.130 [INFO][4348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:39:06.139758 containerd[1577]: 2025-11-08 00:39:06.134 [INFO][4326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:06.141544 containerd[1577]: time="2025-11-08T00:39:06.141511294Z" level=info msg="TearDown network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\" successfully" Nov 8 00:39:06.141623 containerd[1577]: time="2025-11-08T00:39:06.141607153Z" level=info msg="StopPodSandbox for \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\" returns successfully" Nov 8 00:39:06.144671 systemd[1]: run-netns-cni\x2d3a47ac82\x2dcb92\x2df5e1\x2d158a\x2db95500c2fcad.mount: Deactivated successfully. Nov 8 00:39:06.146489 kubelet[2675]: E1108 00:39:06.145391 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:06.147469 containerd[1577]: time="2025-11-08T00:39:06.147444391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sffch,Uid:9f23c5b5-4cfa-46d8-aaba-cb061e55e03e,Namespace:kube-system,Attempt:1,}" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.083 [INFO][4331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.084 [INFO][4331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" iface="eth0" netns="/var/run/netns/cni-ecc26abe-98d3-207e-f0a2-bd97fed5b4e7" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.085 [INFO][4331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" iface="eth0" netns="/var/run/netns/cni-ecc26abe-98d3-207e-f0a2-bd97fed5b4e7" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.087 [INFO][4331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" iface="eth0" netns="/var/run/netns/cni-ecc26abe-98d3-207e-f0a2-bd97fed5b4e7" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.087 [INFO][4331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.087 [INFO][4331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.151 [INFO][4355] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.151 [INFO][4355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.151 [INFO][4355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.159 [WARNING][4355] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.159 [INFO][4355] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.161 [INFO][4355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:06.179959 containerd[1577]: 2025-11-08 00:39:06.172 [INFO][4331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:06.183471 containerd[1577]: time="2025-11-08T00:39:06.183159685Z" level=info msg="TearDown network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\" successfully" Nov 8 00:39:06.183471 containerd[1577]: time="2025-11-08T00:39:06.183183865Z" level=info msg="StopPodSandbox for \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\" returns successfully" Nov 8 00:39:06.184735 containerd[1577]: time="2025-11-08T00:39:06.184588127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-fj7f9,Uid:945f0c5d-79d5-427e-a435-dd67b16eeed0,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:39:06.186028 systemd[1]: run-netns-cni\x2decc26abe\x2d98d3\x2d207e\x2df0a2\x2dbd97fed5b4e7.mount: Deactivated successfully. Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.101 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.104 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" iface="eth0" netns="/var/run/netns/cni-d76d3166-059a-a7c5-e06f-1b7166b8bc88" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.104 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" iface="eth0" netns="/var/run/netns/cni-d76d3166-059a-a7c5-e06f-1b7166b8bc88" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.104 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" iface="eth0" netns="/var/run/netns/cni-d76d3166-059a-a7c5-e06f-1b7166b8bc88" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.104 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.104 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.152 [INFO][4361] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.153 [INFO][4361] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.162 [INFO][4361] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.186 [WARNING][4361] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.186 [INFO][4361] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.189 [INFO][4361] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:06.206887 containerd[1577]: 2025-11-08 00:39:06.196 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:06.207676 containerd[1577]: time="2025-11-08T00:39:06.207651804Z" level=info msg="TearDown network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\" successfully" Nov 8 00:39:06.207995 containerd[1577]: time="2025-11-08T00:39:06.207734214Z" level=info msg="StopPodSandbox for \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\" returns successfully" Nov 8 00:39:06.208471 containerd[1577]: time="2025-11-08T00:39:06.208420535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-qzjh9,Uid:548b6544-42df-4869-bfa7-bb27245d2cb1,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:39:06.273276 kubelet[2675]: E1108 00:39:06.273099 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:39:06.352692 systemd-networkd[1239]: caliac21d99df6c: Link UP Nov 8 00:39:06.354505 systemd-networkd[1239]: caliac21d99df6c: Gained carrier Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.250 [INFO][4382] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.261 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0 calico-apiserver-69649455c- calico-apiserver 945f0c5d-79d5-427e-a435-dd67b16eeed0 929 0 2025-11-08 00:38:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69649455c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-57-26 calico-apiserver-69649455c-fj7f9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac21d99df6c [] [] }} ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.261 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.308 [INFO][4408] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" HandleID="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.308 [INFO][4408] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" HandleID="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-57-26", "pod":"calico-apiserver-69649455c-fj7f9", "timestamp":"2025-11-08 00:39:06.308507349 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.308 [INFO][4408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.308 [INFO][4408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.308 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.320 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.324 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.328 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.330 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.332 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.332 [INFO][4408] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.334 [INFO][4408] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53 Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.339 [INFO][4408] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.344 [INFO][4408] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.67/26] block=192.168.31.64/26 handle="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.345 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.67/26] handle="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" host="172-239-57-26" Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.345 [INFO][4408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:39:06.381470 containerd[1577]: 2025-11-08 00:39:06.345 [INFO][4408] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.67/26] IPv6=[] ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" HandleID="k8s-pod-network.836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.382177 containerd[1577]: 2025-11-08 00:39:06.348 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"945f0c5d-79d5-427e-a435-dd67b16eeed0", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"calico-apiserver-69649455c-fj7f9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac21d99df6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:06.382177 containerd[1577]: 2025-11-08 00:39:06.348 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.67/32] ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.382177 containerd[1577]: 2025-11-08 00:39:06.348 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac21d99df6c ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.382177 containerd[1577]: 2025-11-08 00:39:06.358 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.382177 containerd[1577]: 2025-11-08 00:39:06.358 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"945f0c5d-79d5-427e-a435-dd67b16eeed0", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53", Pod:"calico-apiserver-69649455c-fj7f9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac21d99df6c", MAC:"0a:db:e5:8b:cf:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:06.382177 containerd[1577]: 2025-11-08 00:39:06.374 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-fj7f9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:06.412954 containerd[1577]: time="2025-11-08T00:39:06.412658967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:06.412954 containerd[1577]: time="2025-11-08T00:39:06.412711377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:06.412954 containerd[1577]: time="2025-11-08T00:39:06.412722077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:06.412954 containerd[1577]: time="2025-11-08T00:39:06.412824337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:06.465495 systemd-networkd[1239]: calic4cff9a3f1e: Link UP Nov 8 00:39:06.467372 systemd-networkd[1239]: calic4cff9a3f1e: Gained carrier Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.231 [INFO][4373] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.243 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0 coredns-668d6bf9bc- kube-system 9f23c5b5-4cfa-46d8-aaba-cb061e55e03e 928 0 2025-11-08 00:38:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-57-26 coredns-668d6bf9bc-sffch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4cff9a3f1e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.243 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.385 [INFO][4406] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" HandleID="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.385 [INFO][4406] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" HandleID="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-57-26", "pod":"coredns-668d6bf9bc-sffch", "timestamp":"2025-11-08 00:39:06.385247163 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.385 [INFO][4406] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.385 [INFO][4406] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.385 [INFO][4406] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.419 [INFO][4406] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.424 [INFO][4406] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.429 [INFO][4406] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.431 [INFO][4406] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.433 [INFO][4406] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.433 [INFO][4406] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.434 [INFO][4406] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.440 [INFO][4406] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.447 [INFO][4406] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.68/26] block=192.168.31.64/26 handle="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.447 [INFO][4406] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.68/26] handle="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" host="172-239-57-26" Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.447 [INFO][4406] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:39:06.479927 containerd[1577]: 2025-11-08 00:39:06.447 [INFO][4406] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.68/26] IPv6=[] ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" HandleID="k8s-pod-network.59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.481363 containerd[1577]: 2025-11-08 00:39:06.452 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"coredns-668d6bf9bc-sffch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4cff9a3f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:06.481363 containerd[1577]: 2025-11-08 00:39:06.454 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.68/32] ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.481363 containerd[1577]: 2025-11-08 00:39:06.454 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4cff9a3f1e ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.481363 containerd[1577]: 2025-11-08 00:39:06.466 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" 
WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.481363 containerd[1577]: 2025-11-08 00:39:06.467 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe", Pod:"coredns-668d6bf9bc-sffch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4cff9a3f1e", MAC:"4a:54:19:c8:94:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:06.481363 containerd[1577]: 2025-11-08 00:39:06.477 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe" Namespace="kube-system" Pod="coredns-668d6bf9bc-sffch" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:06.527289 containerd[1577]: time="2025-11-08T00:39:06.525371305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:06.527289 containerd[1577]: time="2025-11-08T00:39:06.525442325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:06.527289 containerd[1577]: time="2025-11-08T00:39:06.525456345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:06.527289 containerd[1577]: time="2025-11-08T00:39:06.525551525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:06.538352 containerd[1577]: time="2025-11-08T00:39:06.538298370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-fj7f9,Uid:945f0c5d-79d5-427e-a435-dd67b16eeed0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53\"" Nov 8 00:39:06.542165 containerd[1577]: time="2025-11-08T00:39:06.542068875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:39:06.600368 systemd-networkd[1239]: calied4a88d2a8c: Link UP Nov 8 00:39:06.602510 systemd-networkd[1239]: calied4a88d2a8c: Gained carrier Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.289 [INFO][4391] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.326 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0 calico-apiserver-69649455c- calico-apiserver 548b6544-42df-4869-bfa7-bb27245d2cb1 930 0 2025-11-08 00:38:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69649455c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-57-26 calico-apiserver-69649455c-qzjh9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied4a88d2a8c [] [] }} ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.327 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.410 [INFO][4420] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" HandleID="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.410 [INFO][4420] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" HandleID="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-57-26", "pod":"calico-apiserver-69649455c-qzjh9", "timestamp":"2025-11-08 00:39:06.410306923 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.410 [INFO][4420] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.447 [INFO][4420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.448 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.521 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.533 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.542 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.545 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.548 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.548 [INFO][4420] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.562 [INFO][4420] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.568 [INFO][4420] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.576 [INFO][4420] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.69/26] block=192.168.31.64/26 handle="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.577 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.69/26] handle="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" host="172-239-57-26" Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.577 [INFO][4420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:39:06.626209 containerd[1577]: 2025-11-08 00:39:06.577 [INFO][4420] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.69/26] IPv6=[] ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" HandleID="k8s-pod-network.cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.626823 containerd[1577]: 2025-11-08 00:39:06.585 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"548b6544-42df-4869-bfa7-bb27245d2cb1", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"calico-apiserver-69649455c-qzjh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied4a88d2a8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:06.626823 containerd[1577]: 2025-11-08 00:39:06.586 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.69/32] ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.626823 containerd[1577]: 2025-11-08 00:39:06.587 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied4a88d2a8c ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.626823 containerd[1577]: 2025-11-08 00:39:06.607 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.626823 containerd[1577]: 2025-11-08 00:39:06.608 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"548b6544-42df-4869-bfa7-bb27245d2cb1", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a", Pod:"calico-apiserver-69649455c-qzjh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied4a88d2a8c", MAC:"c6:f4:f8:12:23:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:06.626823 containerd[1577]: 2025-11-08 00:39:06.618 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a" Namespace="calico-apiserver" Pod="calico-apiserver-69649455c-qzjh9" WorkloadEndpoint="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:06.660526 containerd[1577]: time="2025-11-08T00:39:06.659480640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sffch,Uid:9f23c5b5-4cfa-46d8-aaba-cb061e55e03e,Namespace:kube-system,Attempt:1,} returns sandbox id \"59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe\"" Nov 8 00:39:06.662634 kubelet[2675]: E1108 00:39:06.661771 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:06.668825 containerd[1577]: time="2025-11-08T00:39:06.668763501Z" level=info msg="CreateContainer within sandbox \"59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:39:06.692817 containerd[1577]: time="2025-11-08T00:39:06.692739600Z" level=info msg="CreateContainer within sandbox \"59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ffc3e42231939ab95f98bea822014894b548660f745e1755c76ff9551c29840\"" Nov 8 00:39:06.694279 containerd[1577]: time="2025-11-08T00:39:06.692485309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:06.694279 containerd[1577]: time="2025-11-08T00:39:06.692548020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:06.694279 containerd[1577]: time="2025-11-08T00:39:06.692562870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:06.694279 containerd[1577]: time="2025-11-08T00:39:06.692660100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:06.694894 containerd[1577]: time="2025-11-08T00:39:06.694832943Z" level=info msg="StartContainer for \"6ffc3e42231939ab95f98bea822014894b548660f745e1755c76ff9551c29840\"" Nov 8 00:39:06.702261 containerd[1577]: time="2025-11-08T00:39:06.702225892Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:06.703200 containerd[1577]: time="2025-11-08T00:39:06.703067583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:39:06.703324 containerd[1577]: time="2025-11-08T00:39:06.703289164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:06.703921 kubelet[2675]: E1108 00:39:06.703523 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:06.703921 kubelet[2675]: E1108 00:39:06.703572 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:06.703921 kubelet[2675]: E1108 00:39:06.703709 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5mcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-fj7f9_calico-apiserver(945f0c5d-79d5-427e-a435-dd67b16eeed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:06.706181 kubelet[2675]: E1108 00:39:06.705445 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:39:06.733349 systemd-networkd[1239]: cali9d1538a245f: Gained IPv6LL Nov 8 00:39:06.824994 containerd[1577]: time="2025-11-08T00:39:06.824923092Z" level=info msg="StartContainer for \"6ffc3e42231939ab95f98bea822014894b548660f745e1755c76ff9551c29840\" returns successfully" Nov 8 00:39:06.926077 containerd[1577]: time="2025-11-08T00:39:06.926006357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69649455c-qzjh9,Uid:548b6544-42df-4869-bfa7-bb27245d2cb1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a\"" Nov 8 00:39:06.929347 containerd[1577]: time="2025-11-08T00:39:06.929270571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 
00:39:06.994839 containerd[1577]: time="2025-11-08T00:39:06.994758681Z" level=info msg="StopPodSandbox for \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\"" Nov 8 00:39:07.061851 containerd[1577]: time="2025-11-08T00:39:07.061778274Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:07.063848 containerd[1577]: time="2025-11-08T00:39:07.063791377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:39:07.065159 containerd[1577]: time="2025-11-08T00:39:07.063927457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:07.066330 kubelet[2675]: E1108 00:39:07.065331 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:07.066330 kubelet[2675]: E1108 00:39:07.065386 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:07.066330 kubelet[2675]: E1108 00:39:07.065503 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqxqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-qzjh9_calico-apiserver(548b6544-42df-4869-bfa7-bb27245d2cb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:07.068027 kubelet[2675]: E1108 00:39:07.067353 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.045 [INFO][4638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.045 [INFO][4638] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" iface="eth0" netns="/var/run/netns/cni-deeb7ab6-d7fe-0710-0104-c3f7e1b4105d" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.046 [INFO][4638] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" iface="eth0" netns="/var/run/netns/cni-deeb7ab6-d7fe-0710-0104-c3f7e1b4105d" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.047 [INFO][4638] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" iface="eth0" netns="/var/run/netns/cni-deeb7ab6-d7fe-0710-0104-c3f7e1b4105d" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.047 [INFO][4638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.047 [INFO][4638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.079 [INFO][4646] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.079 [INFO][4646] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.079 [INFO][4646] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.086 [WARNING][4646] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.086 [INFO][4646] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.089 [INFO][4646] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:07.096072 containerd[1577]: 2025-11-08 00:39:07.092 [INFO][4638] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:07.097050 containerd[1577]: time="2025-11-08T00:39:07.097002898Z" level=info msg="TearDown network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\" successfully" Nov 8 00:39:07.097100 containerd[1577]: time="2025-11-08T00:39:07.097050578Z" level=info msg="StopPodSandbox for \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\" returns successfully" Nov 8 00:39:07.098328 containerd[1577]: time="2025-11-08T00:39:07.098297120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mdrsj,Uid:3e31263c-8cf9-4e4b-a04e-7c52af3f73c1,Namespace:calico-system,Attempt:1,}" Nov 8 00:39:07.156968 systemd[1]: run-netns-cni\x2dd76d3166\x2d059a\x2da7c5\x2de06f\x2d1b7166b8bc88.mount: Deactivated successfully. Nov 8 00:39:07.157608 systemd[1]: run-netns-cni\x2ddeeb7ab6\x2dd7fe\x2d0710\x2d0104\x2dc3f7e1b4105d.mount: Deactivated successfully. 
Nov 8 00:39:07.233267 systemd-networkd[1239]: cali59cb3b51be7: Link UP Nov 8 00:39:07.233734 systemd-networkd[1239]: cali59cb3b51be7: Gained carrier Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.134 [INFO][4652] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.154 [INFO][4652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-csi--node--driver--mdrsj-eth0 csi-node-driver- calico-system 3e31263c-8cf9-4e4b-a04e-7c52af3f73c1 955 0 2025-11-08 00:38:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-57-26 csi-node-driver-mdrsj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali59cb3b51be7 [] [] }} ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.155 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.190 [INFO][4665] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" HandleID="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.191 [INFO][4665] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" HandleID="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-26", "pod":"csi-node-driver-mdrsj", "timestamp":"2025-11-08 00:39:07.190955315 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.191 [INFO][4665] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.191 [INFO][4665] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
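[The HandleID fields above show how Calico keys each allocation: the fixed prefix "k8s-pod-network." plus the container ID. The release path earlier (ipam_plugin.go 436/453 at 00:39:07.079-086) looks the allocation up by the same key, which is why a missing handle produces only a WARNING and is ignored, keeping CNI DEL idempotent. A toy sketch of that pairing, with names of our own choosing:

    # Sketch: assign/release keyed by "k8s-pod-network.<containerID>", as in
    # the HandleID fields above; releasing an unknown handle is not an error.
    allocations: dict[str, str] = {}

    def assign(container_id: str, ip: str) -> None:
        allocations["k8s-pod-network." + container_id] = ip

    def release(container_id: str) -> None:
        if allocations.pop("k8s-pod-network." + container_id, None) is None:
            print("Asked to release address but it doesn't exist. Ignoring")
]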
Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.191 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.198 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.203 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.207 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.209 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.211 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.211 [INFO][4665] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.212 [INFO][4665] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53 Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.216 [INFO][4665] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.224 [INFO][4665] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.70/26] block=192.168.31.64/26 handle="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.224 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.70/26] handle="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" host="172-239-57-26" Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.224 [INFO][4665] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
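[The IPAM steps above pick the /26 block this node holds an affinity for and hand out the next free address from it. All three addresses assigned across this section (192.168.31.69 at 00:39:06.577, .70 here, .71 below for goldmane) come from that one 64-address block. A quick check with Python's ipaddress module, illustrative only:

    # Verify the addresses assigned in this section fall inside the block
    # that node 172-239-57-26 has an affinity for.
    import ipaddress

    block = ipaddress.ip_network("192.168.31.64/26")
    print(block.num_addresses)  # 64 addresses per Calico IPAM block
    for ip in ("192.168.31.69", "192.168.31.70", "192.168.31.71"):
        assert ipaddress.ip_address(ip) in block
        print(ip, "in", block)
]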
Nov 8 00:39:07.251450 containerd[1577]: 2025-11-08 00:39:07.224 [INFO][4665] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.70/26] IPv6=[] ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" HandleID="k8s-pod-network.79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.252239 containerd[1577]: 2025-11-08 00:39:07.228 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-csi--node--driver--mdrsj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"csi-node-driver-mdrsj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59cb3b51be7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:07.252239 containerd[1577]: 2025-11-08 00:39:07.228 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.70/32] ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.252239 containerd[1577]: 2025-11-08 00:39:07.228 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59cb3b51be7 ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.252239 containerd[1577]: 2025-11-08 00:39:07.232 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.252239 containerd[1577]: 2025-11-08 00:39:07.233 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" 
Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-csi--node--driver--mdrsj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53", Pod:"csi-node-driver-mdrsj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59cb3b51be7", MAC:"de:bd:b4:92:ed:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:07.252239 containerd[1577]: 2025-11-08 00:39:07.243 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53" Namespace="calico-system" Pod="csi-node-driver-mdrsj" WorkloadEndpoint="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:07.276721 containerd[1577]: time="2025-11-08T00:39:07.273115897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:07.276721 containerd[1577]: time="2025-11-08T00:39:07.273764638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:07.276721 containerd[1577]: time="2025-11-08T00:39:07.273777218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:07.276721 containerd[1577]: time="2025-11-08T00:39:07.273854818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:07.276862 kubelet[2675]: E1108 00:39:07.274348 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:39:07.286700 kubelet[2675]: E1108 00:39:07.286657 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:07.300622 kubelet[2675]: E1108 00:39:07.300583 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:39:07.303939 kubelet[2675]: E1108 00:39:07.303717 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:39:07.335174 kubelet[2675]: I1108 00:39:07.334126 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sffch" podStartSLOduration=32.334109703 podStartE2EDuration="32.334109703s" podCreationTimestamp="2025-11-08 00:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:39:07.312406126 +0000 UTC m=+39.427876222" watchObservedRunningTime="2025-11-08 00:39:07.334109703 +0000 UTC m=+39.449579789" Nov 8 00:39:07.380361 containerd[1577]: time="2025-11-08T00:39:07.380274981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mdrsj,Uid:3e31263c-8cf9-4e4b-a04e-7c52af3f73c1,Namespace:calico-system,Attempt:1,} returns sandbox id \"79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53\"" Nov 8 00:39:07.382483 containerd[1577]: time="2025-11-08T00:39:07.382456113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:39:07.532798 containerd[1577]: time="2025-11-08T00:39:07.532742591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:07.533828 containerd[1577]: time="2025-11-08T00:39:07.533790651Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:39:07.533956 containerd[1577]: time="2025-11-08T00:39:07.533871671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:39:07.534022 kubelet[2675]: E1108 00:39:07.533987 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:39:07.534115 kubelet[2675]: E1108 00:39:07.534034 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:39:07.534219 kubelet[2675]: E1108 00:39:07.534169 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:07.536416 
containerd[1577]: time="2025-11-08T00:39:07.536212205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:39:07.565337 systemd-networkd[1239]: caliac21d99df6c: Gained IPv6LL Nov 8 00:39:07.675237 containerd[1577]: time="2025-11-08T00:39:07.674965017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:07.676188 containerd[1577]: time="2025-11-08T00:39:07.676058909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:39:07.676339 containerd[1577]: time="2025-11-08T00:39:07.676291679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:39:07.676600 kubelet[2675]: E1108 00:39:07.676520 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:39:07.677374 kubelet[2675]: E1108 00:39:07.677092 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:39:07.677374 kubelet[2675]: E1108 00:39:07.677264 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:07.679149 kubelet[2675]: E1108 00:39:07.679061 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:39:07.801043 kubelet[2675]: I1108 00:39:07.799927 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:39:07.801043 kubelet[2675]: E1108 00:39:07.800836 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:07.996576 containerd[1577]: time="2025-11-08T00:39:07.996001057Z" level=info msg="StopPodSandbox for 
\"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\"" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.075 [INFO][4794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.076 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" iface="eth0" netns="/var/run/netns/cni-df453efc-88b9-7abb-ce2d-e6d1dc4046db" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.076 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" iface="eth0" netns="/var/run/netns/cni-df453efc-88b9-7abb-ce2d-e6d1dc4046db" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.077 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" iface="eth0" netns="/var/run/netns/cni-df453efc-88b9-7abb-ce2d-e6d1dc4046db" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.077 [INFO][4794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.077 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.103 [INFO][4802] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.104 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.104 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.109 [WARNING][4802] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.109 [INFO][4802] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.110 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:08.120200 containerd[1577]: 2025-11-08 00:39:08.113 [INFO][4794] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:08.120200 containerd[1577]: time="2025-11-08T00:39:08.119988004Z" level=info msg="TearDown network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\" successfully" Nov 8 00:39:08.120200 containerd[1577]: time="2025-11-08T00:39:08.120014534Z" level=info msg="StopPodSandbox for \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\" returns successfully" Nov 8 00:39:08.121565 containerd[1577]: time="2025-11-08T00:39:08.120634375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxx8g,Uid:c4fcf74c-bacb-403a-b9d1-404b70dbc1f8,Namespace:calico-system,Attempt:1,}" Nov 8 00:39:08.148031 systemd[1]: run-netns-cni\x2ddf453efc\x2d88b9\x2d7abb\x2dce2d\x2de6d1dc4046db.mount: Deactivated successfully. Nov 8 00:39:08.252846 systemd-networkd[1239]: cali96329b68488: Link UP Nov 8 00:39:08.253793 systemd-networkd[1239]: cali96329b68488: Gained carrier Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.157 [INFO][4809] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.173 [INFO][4809] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0 goldmane-666569f655- calico-system c4fcf74c-bacb-403a-b9d1-404b70dbc1f8 993 0 2025-11-08 00:38:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-57-26 goldmane-666569f655-gxx8g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali96329b68488 [] [] }} ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.173 [INFO][4809] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.211 [INFO][4822] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" HandleID="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.212 [INFO][4822] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" HandleID="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333640), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-26", "pod":"goldmane-666569f655-gxx8g", "timestamp":"2025-11-08 00:39:08.211869609 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.212 [INFO][4822] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.212 [INFO][4822] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.212 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.217 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.224 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.228 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.229 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.232 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.233 [INFO][4822] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.234 [INFO][4822] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.239 [INFO][4822] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.245 [INFO][4822] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.71/26] block=192.168.31.64/26 handle="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.245 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.71/26] handle="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" host="172-239-57-26" Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.245 [INFO][4822] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
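[Every PullImage failure in this section (apiserver and csi/node-driver-registrar above, goldmane below) is the same registry-side miss: containerd cannot resolve the v3.30.4 tag, reads only the registry's short error body (the "bytes read=77" lines), and reports NotFound. That resolution step can be reproduced outside containerd with a manifest HEAD against the OCI distribution API. The sketch below assumes anonymous pull tokens are available for the repository on ghcr.io and simplifies the Accept headers containerd actually sends:

    # Sketch: reproduce containerd's tag-resolution step for one failed image.
    # A 404 on the manifest HEAD corresponds to the "not found" errors above.
    import json
    import urllib.error
    import urllib.request

    image = "flatcar/calico/apiserver"   # repository from the log
    tag = "v3.30.4"                      # tag containerd could not resolve

    token_url = f"https://ghcr.io/token?scope=repository:{image}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{image}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("tag resolves:", resp.status)
    except urllib.error.HTTPError as e:
        print("tag missing:", e.code)    # 404 -> containerd's NotFound
]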
Nov 8 00:39:08.271434 containerd[1577]: 2025-11-08 00:39:08.245 [INFO][4822] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.71/26] IPv6=[] ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" HandleID="k8s-pod-network.5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.273109 containerd[1577]: 2025-11-08 00:39:08.248 [INFO][4809] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"goldmane-666569f655-gxx8g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96329b68488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:08.273109 containerd[1577]: 2025-11-08 00:39:08.248 [INFO][4809] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.71/32] ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.273109 containerd[1577]: 2025-11-08 00:39:08.248 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96329b68488 ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.273109 containerd[1577]: 2025-11-08 00:39:08.254 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.273109 containerd[1577]: 2025-11-08 00:39:08.255 [INFO][4809] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" 
WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c", Pod:"goldmane-666569f655-gxx8g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96329b68488", MAC:"f2:a0:91:ae:f6:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:08.273109 containerd[1577]: 2025-11-08 00:39:08.267 [INFO][4809] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c" Namespace="calico-system" Pod="goldmane-666569f655-gxx8g" WorkloadEndpoint="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:08.291421 containerd[1577]: time="2025-11-08T00:39:08.291121659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:08.292228 containerd[1577]: time="2025-11-08T00:39:08.292006200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:08.292228 containerd[1577]: time="2025-11-08T00:39:08.292033810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:08.292511 containerd[1577]: time="2025-11-08T00:39:08.292399540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:08.304794 kubelet[2675]: E1108 00:39:08.304767 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:08.307416 kubelet[2675]: E1108 00:39:08.307203 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:39:08.315337 kubelet[2675]: E1108 00:39:08.315287 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:39:08.315609 kubelet[2675]: E1108 00:39:08.315394 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:39:08.387515 containerd[1577]: time="2025-11-08T00:39:08.387484411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxx8g,Uid:c4fcf74c-bacb-403a-b9d1-404b70dbc1f8,Namespace:calico-system,Attempt:1,} returns sandbox id \"5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c\"" Nov 8 00:39:08.390179 containerd[1577]: time="2025-11-08T00:39:08.389529393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:39:08.461376 systemd-networkd[1239]: calied4a88d2a8c: Gained IPv6LL Nov 8 00:39:08.516169 containerd[1577]: time="2025-11-08T00:39:08.516070413Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:08.517651 containerd[1577]: time="2025-11-08T00:39:08.517590314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:39:08.517835 containerd[1577]: time="2025-11-08T00:39:08.517700434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:08.517928 kubelet[2675]: E1108 00:39:08.517842 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:39:08.517928 kubelet[2675]: E1108 00:39:08.517875 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:39:08.518092 kubelet[2675]: E1108 00:39:08.517972 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtp5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gxx8g_calico-system(c4fcf74c-bacb-403a-b9d1-404b70dbc1f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:08.519440 kubelet[2675]: E1108 00:39:08.519360 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:39:08.525263 systemd-networkd[1239]: calic4cff9a3f1e: Gained IPv6LL Nov 8 00:39:08.973296 systemd-networkd[1239]: cali59cb3b51be7: Gained IPv6LL Nov 8 00:39:08.994109 containerd[1577]: time="2025-11-08T00:39:08.994069635Z" level=info msg="StopPodSandbox for \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\"" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.036 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.037 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" iface="eth0" netns="/var/run/netns/cni-55ebcdc1-b979-3d35-5bc1-87e206c8ffed" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.038 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" iface="eth0" netns="/var/run/netns/cni-55ebcdc1-b979-3d35-5bc1-87e206c8ffed" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.039 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" iface="eth0" netns="/var/run/netns/cni-55ebcdc1-b979-3d35-5bc1-87e206c8ffed" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.039 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.039 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.063 [INFO][4898] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.064 [INFO][4898] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.064 [INFO][4898] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.072 [WARNING][4898] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.072 [INFO][4898] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.073 [INFO][4898] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:09.084083 containerd[1577]: 2025-11-08 00:39:09.076 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:09.086322 containerd[1577]: time="2025-11-08T00:39:09.084623930Z" level=info msg="TearDown network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\" successfully" Nov 8 00:39:09.086322 containerd[1577]: time="2025-11-08T00:39:09.084649780Z" level=info msg="StopPodSandbox for \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\" returns successfully" Nov 8 00:39:09.088866 kubelet[2675]: E1108 00:39:09.088344 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:09.090065 containerd[1577]: time="2025-11-08T00:39:09.089790797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwjrp,Uid:be1ef78a-3d23-4e67-a9e4-62513d5dd793,Namespace:kube-system,Attempt:1,}" Nov 8 00:39:09.090615 systemd[1]: run-netns-cni\x2d55ebcdc1\x2db979\x2d3d35\x2d5bc1\x2d87e206c8ffed.mount: Deactivated successfully. 
Nov 8 00:39:09.243953 systemd-networkd[1239]: cali11b75a135e8: Link UP Nov 8 00:39:09.245753 systemd-networkd[1239]: cali11b75a135e8: Gained carrier Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.158 [INFO][4915] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.176 [INFO][4915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0 coredns-668d6bf9bc- kube-system be1ef78a-3d23-4e67-a9e4-62513d5dd793 1014 0 2025-11-08 00:38:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-57-26 coredns-668d6bf9bc-zwjrp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali11b75a135e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.176 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.205 [INFO][4929] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" HandleID="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.205 [INFO][4929] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" HandleID="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-57-26", "pod":"coredns-668d6bf9bc-zwjrp", "timestamp":"2025-11-08 00:39:09.205674675 +0000 UTC"}, Hostname:"172-239-57-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.205 [INFO][4929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.205 [INFO][4929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
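Annotation: the recurring dns.go:153 "Nameserver limits exceeded" messages are a kubelet warning, not a Calico error. When building a pod's resolv.conf from the node's, kubelet applies at most three nameserver entries and drops the rest; the "applied nameserver line" it logs (172.232.0.19 172.232.0.20 172.232.0.15) is the truncated result. The Python sketch below imitates that cap; the four-entry resolv.conf is hypothetical, constructed so that its first three entries match the applied line in these records:

    # Imitate kubelet's nameserver cap behind the dns.go:153 warnings:
    # only the first three nameservers are applied, the rest omitted.
    MAX_NAMESERVERS = 3  # kubelet's documented per-resolv.conf limit

    def applied_nameservers(resolv_conf_text: str):
        servers = [
            parts[1]
            for line in resolv_conf_text.splitlines()
            if line.startswith("nameserver") and len(parts := line.split()) >= 2
        ]
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    # Hypothetical node resolv.conf with a fourth (invented) entry;
    # the first three match the applied line kubelet logs above.
    sample = """\
    nameserver 172.232.0.19
    nameserver 172.232.0.20
    nameserver 172.232.0.15
    nameserver 192.0.2.53
    """
    kept, dropped = applied_nameservers(sample)
    print("applied:", " ".join(kept))
    print("omitted:", " ".join(dropped))

Since the warning repeats on every pod sync, trimming the node's resolv.conf to three servers is the usual way to silence it.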
Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.206 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-26' Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.211 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.215 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.219 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.220 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.222 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.222 [INFO][4929] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.224 [INFO][4929] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0 Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.232 [INFO][4929] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.237 [INFO][4929] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.72/26] block=192.168.31.64/26 handle="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.237 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.72/26] handle="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" host="172-239-57-26" Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.237 [INFO][4929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
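Annotation: the IPAM trace above is Calico's block-affinity allocation at work. The host 172-239-57-26 holds an affine /26 block (192.168.31.64/26); after confirming the affinity and loading the block under the host-wide lock, the plugin claims the next free address, 192.168.31.72, for the coredns pod. A /26 block gives each node 64 addresses to hand out locally. A small Python check, using only values taken from the records above, verifies the containment arithmetic:

    # Verify the block/address arithmetic logged by the Calico IPAM
    # plugin: the claimed pod IP must fall inside the host's /26 block.
    import ipaddress

    block = ipaddress.ip_network("192.168.31.64/26")  # block from the log
    pod_ip = ipaddress.ip_address("192.168.31.72")    # address claimed

    print("block size:", block.num_addresses)                        # 64
    print("pod IP in block:", pod_ip in block)                       # True
    print("offset in block:", int(pod_ip) - int(block.network_address))  # 8

Handing each node its own block is what keeps per-pod assignment cheap: the shared datastore is only contended when a node needs a new block, not on every address, which is why the allocation above completes in a few milliseconds under a purely host-local lock.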
Nov 8 00:39:09.259044 containerd[1577]: 2025-11-08 00:39:09.237 [INFO][4929] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.72/26] IPv6=[] ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" HandleID="k8s-pod-network.41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.259932 containerd[1577]: 2025-11-08 00:39:09.240 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be1ef78a-3d23-4e67-a9e4-62513d5dd793", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"", Pod:"coredns-668d6bf9bc-zwjrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b75a135e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:09.259932 containerd[1577]: 2025-11-08 00:39:09.240 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.72/32] ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.259932 containerd[1577]: 2025-11-08 00:39:09.241 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11b75a135e8 ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.259932 containerd[1577]: 2025-11-08 00:39:09.244 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" 
WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.259932 containerd[1577]: 2025-11-08 00:39:09.245 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be1ef78a-3d23-4e67-a9e4-62513d5dd793", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0", Pod:"coredns-668d6bf9bc-zwjrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b75a135e8", MAC:"06:fa:ee:95:5e:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:09.259932 containerd[1577]: 2025-11-08 00:39:09.252 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwjrp" WorkloadEndpoint="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:09.279720 containerd[1577]: time="2025-11-08T00:39:09.279496299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:09.279720 containerd[1577]: time="2025-11-08T00:39:09.279547879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:09.279720 containerd[1577]: time="2025-11-08T00:39:09.279561199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:09.279720 containerd[1577]: time="2025-11-08T00:39:09.279642629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:09.316469 kubelet[2675]: E1108 00:39:09.316428 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:39:09.318903 kubelet[2675]: E1108 00:39:09.318547 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:09.323691 kubelet[2675]: E1108 00:39:09.323576 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:39:09.366498 containerd[1577]: time="2025-11-08T00:39:09.366447320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwjrp,Uid:be1ef78a-3d23-4e67-a9e4-62513d5dd793,Namespace:kube-system,Attempt:1,} returns sandbox id \"41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0\"" Nov 8 00:39:09.367244 kubelet[2675]: E1108 00:39:09.367205 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:09.374528 containerd[1577]: time="2025-11-08T00:39:09.374280400Z" level=info msg="CreateContainer within sandbox \"41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:39:09.386675 containerd[1577]: time="2025-11-08T00:39:09.386649755Z" level=info msg="CreateContainer within sandbox \"41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fee54e986ab791446b0db88f8078d8cca55c4626d7b813b57c81e4f9dc7bad87\"" Nov 8 00:39:09.387953 containerd[1577]: time="2025-11-08T00:39:09.387927367Z" level=info msg="StartContainer for \"fee54e986ab791446b0db88f8078d8cca55c4626d7b813b57c81e4f9dc7bad87\"" Nov 8 00:39:09.448341 containerd[1577]: time="2025-11-08T00:39:09.448278884Z" level=info msg="StartContainer for \"fee54e986ab791446b0db88f8078d8cca55c4626d7b813b57c81e4f9dc7bad87\" returns successfully" Nov 8 
00:39:09.871038 systemd-networkd[1239]: cali96329b68488: Gained IPv6LL Nov 8 00:39:10.316804 kubelet[2675]: E1108 00:39:10.316744 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:10.318713 kubelet[2675]: E1108 00:39:10.318672 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:39:10.344886 kubelet[2675]: I1108 00:39:10.343249 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zwjrp" podStartSLOduration=35.343232031 podStartE2EDuration="35.343232031s" podCreationTimestamp="2025-11-08 00:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:39:10.32729509 +0000 UTC m=+42.442765206" watchObservedRunningTime="2025-11-08 00:39:10.343232031 +0000 UTC m=+42.458702117" Nov 8 00:39:11.022566 systemd-networkd[1239]: cali11b75a135e8: Gained IPv6LL Nov 8 00:39:11.323341 kubelet[2675]: E1108 00:39:11.323207 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:12.327607 kubelet[2675]: E1108 00:39:12.327510 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:12.994554 containerd[1577]: time="2025-11-08T00:39:12.994467729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:39:13.132230 containerd[1577]: time="2025-11-08T00:39:13.131925821Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:13.133519 containerd[1577]: time="2025-11-08T00:39:13.133093013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:39:13.133519 containerd[1577]: time="2025-11-08T00:39:13.133262293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:39:13.133868 kubelet[2675]: E1108 00:39:13.133549 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:39:13.133868 kubelet[2675]: E1108 00:39:13.133608 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:39:13.134166 kubelet[2675]: E1108 00:39:13.133846 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:446bc2bb53ee4664a662201ea699a9cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:13.139788 containerd[1577]: time="2025-11-08T00:39:13.139606071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:39:13.268885 containerd[1577]: time="2025-11-08T00:39:13.268643762Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:13.269876 containerd[1577]: time="2025-11-08T00:39:13.269826084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:39:13.269988 containerd[1577]: time="2025-11-08T00:39:13.269933994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:39:13.270296 kubelet[2675]: E1108 00:39:13.270200 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:39:13.270478 
kubelet[2675]: E1108 00:39:13.270345 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:39:13.270582 kubelet[2675]: E1108 00:39:13.270530 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:13.272573 kubelet[2675]: E1108 00:39:13.272482 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" 
podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:39:15.719999 kubelet[2675]: I1108 00:39:15.719964 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:39:15.721601 kubelet[2675]: E1108 00:39:15.720683 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:16.339866 kubelet[2675]: E1108 00:39:16.339096 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:16.674202 kernel: bpftool[5181]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:39:16.955090 systemd-networkd[1239]: vxlan.calico: Link UP Nov 8 00:39:16.955116 systemd-networkd[1239]: vxlan.calico: Gained carrier Nov 8 00:39:18.125598 systemd-networkd[1239]: vxlan.calico: Gained IPv6LL Nov 8 00:39:19.995013 containerd[1577]: time="2025-11-08T00:39:19.994642088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:39:20.123726 containerd[1577]: time="2025-11-08T00:39:20.123448757Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:20.124965 containerd[1577]: time="2025-11-08T00:39:20.124803410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:39:20.124965 containerd[1577]: time="2025-11-08T00:39:20.124909910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:39:20.125170 kubelet[2675]: E1108 00:39:20.125049 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:39:20.125170 kubelet[2675]: E1108 00:39:20.125090 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:39:20.125567 kubelet[2675]: E1108 00:39:20.125231 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6j5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b6646fb4-vqzk2_calico-system(39e4cff7-6b76-45e5-9e76-44418507cde4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:20.126739 kubelet[2675]: E1108 00:39:20.126682 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:39:21.995271 containerd[1577]: time="2025-11-08T00:39:21.995011117Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:39:22.152939 containerd[1577]: time="2025-11-08T00:39:22.152895597Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:22.154056 containerd[1577]: time="2025-11-08T00:39:22.154020799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:39:22.154164 containerd[1577]: time="2025-11-08T00:39:22.154088909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:39:22.154251 kubelet[2675]: E1108 00:39:22.154215 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:39:22.155326 kubelet[2675]: E1108 00:39:22.154258 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:39:22.155326 kubelet[2675]: E1108 00:39:22.154361 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:22.156658 containerd[1577]: time="2025-11-08T00:39:22.156637633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:39:22.284624 containerd[1577]: time="2025-11-08T00:39:22.284446393Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:22.286099 containerd[1577]: time="2025-11-08T00:39:22.285913505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:39:22.286099 containerd[1577]: time="2025-11-08T00:39:22.285973095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:39:22.286365 kubelet[2675]: E1108 00:39:22.286282 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:39:22.286365 kubelet[2675]: E1108 00:39:22.286337 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:39:22.286522 kubelet[2675]: E1108 00:39:22.286468 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:22.287984 kubelet[2675]: E1108 00:39:22.287914 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:39:22.994331 containerd[1577]: time="2025-11-08T00:39:22.994067979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:39:23.131248 containerd[1577]: time="2025-11-08T00:39:23.131220452Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:23.132547 containerd[1577]: time="2025-11-08T00:39:23.132496104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:39:23.132650 containerd[1577]: time="2025-11-08T00:39:23.132547934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:23.132724 kubelet[2675]: E1108 00:39:23.132645 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:39:23.132724 kubelet[2675]: E1108 00:39:23.132677 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:39:23.132945 kubelet[2675]: E1108 00:39:23.132840 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtp5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gxx8g_calico-system(c4fcf74c-bacb-403a-b9d1-404b70dbc1f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:23.133557 containerd[1577]: time="2025-11-08T00:39:23.133516535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:39:23.134484 kubelet[2675]: E1108 00:39:23.134236 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:39:23.264180 containerd[1577]: time="2025-11-08T00:39:23.263975759Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:23.265243 containerd[1577]: time="2025-11-08T00:39:23.265161481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:39:23.265243 containerd[1577]: time="2025-11-08T00:39:23.265203101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:23.265437 kubelet[2675]: E1108 00:39:23.265345 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:23.265437 kubelet[2675]: E1108 00:39:23.265386 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:23.266519 kubelet[2675]: E1108 00:39:23.265603 2675 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqxqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-qzjh9_calico-apiserver(548b6544-42df-4869-bfa7-bb27245d2cb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:23.266628 containerd[1577]: time="2025-11-08T00:39:23.266065302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:39:23.266817 kubelet[2675]: E1108 00:39:23.266771 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:39:23.395473 containerd[1577]: time="2025-11-08T00:39:23.395394974Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:23.396412 containerd[1577]: time="2025-11-08T00:39:23.396333355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:39:23.396412 containerd[1577]: time="2025-11-08T00:39:23.396386296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:23.396712 kubelet[2675]: E1108 00:39:23.396577 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:23.396712 kubelet[2675]: E1108 00:39:23.396610 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:23.397689 kubelet[2675]: E1108 00:39:23.396772 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5mcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-fj7f9_calico-apiserver(945f0c5d-79d5-427e-a435-dd67b16eeed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:23.397940 kubelet[2675]: E1108 00:39:23.397899 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:39:28.000716 containerd[1577]: time="2025-11-08T00:39:27.994371059Z" level=info msg="StopPodSandbox for \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\"" Nov 8 00:39:28.016680 kubelet[2675]: E1108 00:39:28.016592 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.081 [WARNING][5312] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0", GenerateName:"calico-kube-controllers-74b6646fb4-", Namespace:"calico-system", SelfLink:"", UID:"39e4cff7-6b76-45e5-9e76-44418507cde4", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b6646fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9", Pod:"calico-kube-controllers-74b6646fb4-vqzk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d1538a245f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.083 [INFO][5312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.083 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" iface="eth0" netns="" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.083 [INFO][5312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.084 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.116 [INFO][5319] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.117 [INFO][5319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.117 [INFO][5319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.123 [WARNING][5319] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.123 [INFO][5319] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.125 [INFO][5319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.130909 containerd[1577]: 2025-11-08 00:39:28.128 [INFO][5312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.131471 containerd[1577]: time="2025-11-08T00:39:28.130953945Z" level=info msg="TearDown network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\" successfully" Nov 8 00:39:28.131471 containerd[1577]: time="2025-11-08T00:39:28.130987625Z" level=info msg="StopPodSandbox for \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\" returns successfully" Nov 8 00:39:28.132110 containerd[1577]: time="2025-11-08T00:39:28.131785796Z" level=info msg="RemovePodSandbox for \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\"" Nov 8 00:39:28.132110 containerd[1577]: time="2025-11-08T00:39:28.131819696Z" level=info msg="Forcibly stopping sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\"" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.166 [WARNING][5334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0", GenerateName:"calico-kube-controllers-74b6646fb4-", Namespace:"calico-system", SelfLink:"", UID:"39e4cff7-6b76-45e5-9e76-44418507cde4", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b6646fb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"0a52b07a7e8313b1be476ea87f63149e3e0b45f7746763f765bed70da38cf6b9", Pod:"calico-kube-controllers-74b6646fb4-vqzk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d1538a245f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.166 [INFO][5334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.166 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" iface="eth0" netns="" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.166 [INFO][5334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.166 [INFO][5334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.192 [INFO][5341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.192 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.192 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.197 [WARNING][5341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.197 [INFO][5341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" HandleID="k8s-pod-network.4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Workload="172--239--57--26-k8s-calico--kube--controllers--74b6646fb4--vqzk2-eth0" Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.199 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.205246 containerd[1577]: 2025-11-08 00:39:28.202 [INFO][5334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef" Nov 8 00:39:28.205246 containerd[1577]: time="2025-11-08T00:39:28.204568550Z" level=info msg="TearDown network for sandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\" successfully" Nov 8 00:39:28.211744 containerd[1577]: time="2025-11-08T00:39:28.211676851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:28.211863 containerd[1577]: time="2025-11-08T00:39:28.211761261Z" level=info msg="RemovePodSandbox \"4f6d88fe66f07d5738c66d0c726c32dd08e19c6de7d307b9e5e047298805acef\" returns successfully" Nov 8 00:39:28.212522 containerd[1577]: time="2025-11-08T00:39:28.212422732Z" level=info msg="StopPodSandbox for \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\"" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.253 [WARNING][5355] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-csi--node--driver--mdrsj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53", Pod:"csi-node-driver-mdrsj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59cb3b51be7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.253 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.253 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" iface="eth0" netns="" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.253 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.253 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.285 [INFO][5362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.286 [INFO][5362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.286 [INFO][5362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.291 [WARNING][5362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.291 [INFO][5362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.293 [INFO][5362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.298470 containerd[1577]: 2025-11-08 00:39:28.295 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.298470 containerd[1577]: time="2025-11-08T00:39:28.298220285Z" level=info msg="TearDown network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\" successfully" Nov 8 00:39:28.298470 containerd[1577]: time="2025-11-08T00:39:28.298244005Z" level=info msg="StopPodSandbox for \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\" returns successfully" Nov 8 00:39:28.299630 containerd[1577]: time="2025-11-08T00:39:28.298620305Z" level=info msg="RemovePodSandbox for \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\"" Nov 8 00:39:28.299630 containerd[1577]: time="2025-11-08T00:39:28.298644615Z" level=info msg="Forcibly stopping sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\"" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.332 [WARNING][5378] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-csi--node--driver--mdrsj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e31263c-8cf9-4e4b-a04e-7c52af3f73c1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"79610995e6191c8aad312dc0ff7094694f1e7ca2f6e8d20e1fb0ec787a1cdc53", Pod:"csi-node-driver-mdrsj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59cb3b51be7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.332 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.332 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" iface="eth0" netns="" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.332 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.332 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.354 [INFO][5385] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.354 [INFO][5385] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.354 [INFO][5385] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.359 [WARNING][5385] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.359 [INFO][5385] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" HandleID="k8s-pod-network.721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Workload="172--239--57--26-k8s-csi--node--driver--mdrsj-eth0" Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.360 [INFO][5385] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.365359 containerd[1577]: 2025-11-08 00:39:28.362 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96" Nov 8 00:39:28.365788 containerd[1577]: time="2025-11-08T00:39:28.365382332Z" level=info msg="TearDown network for sandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\" successfully" Nov 8 00:39:28.369387 containerd[1577]: time="2025-11-08T00:39:28.369359206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:28.369472 containerd[1577]: time="2025-11-08T00:39:28.369395457Z" level=info msg="RemovePodSandbox \"721fcdde0fdd3bcaae45f80a46d7a64b1139d94acbebd0fb783e2ea740410d96\" returns successfully" Nov 8 00:39:28.369958 containerd[1577]: time="2025-11-08T00:39:28.369706057Z" level=info msg="StopPodSandbox for \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\"" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.402 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"548b6544-42df-4869-bfa7-bb27245d2cb1", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a", Pod:"calico-apiserver-69649455c-qzjh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied4a88d2a8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.402 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.402 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" iface="eth0" netns="" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.402 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.402 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.425 [INFO][5406] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.425 [INFO][5406] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.425 [INFO][5406] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.430 [WARNING][5406] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.430 [INFO][5406] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.431 [INFO][5406] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.436473 containerd[1577]: 2025-11-08 00:39:28.434 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.436936 containerd[1577]: time="2025-11-08T00:39:28.436568033Z" level=info msg="TearDown network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\" successfully" Nov 8 00:39:28.436936 containerd[1577]: time="2025-11-08T00:39:28.436595273Z" level=info msg="StopPodSandbox for \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\" returns successfully" Nov 8 00:39:28.437373 containerd[1577]: time="2025-11-08T00:39:28.437348314Z" level=info msg="RemovePodSandbox for \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\"" Nov 8 00:39:28.437373 containerd[1577]: time="2025-11-08T00:39:28.437378694Z" level=info msg="Forcibly stopping sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\"" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.471 [WARNING][5420] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"548b6544-42df-4869-bfa7-bb27245d2cb1", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"cef9ca5f1ce0b8b597dcbf4d2d964fe10708f37687ea86392c9b0b118201717a", Pod:"calico-apiserver-69649455c-qzjh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied4a88d2a8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.472 [INFO][5420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.472 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" iface="eth0" netns="" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.472 [INFO][5420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.472 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.494 [INFO][5427] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.494 [INFO][5427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.494 [INFO][5427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.499 [WARNING][5427] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.499 [INFO][5427] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" HandleID="k8s-pod-network.d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--qzjh9-eth0" Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.501 [INFO][5427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.505307 containerd[1577]: 2025-11-08 00:39:28.503 [INFO][5420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb" Nov 8 00:39:28.505724 containerd[1577]: time="2025-11-08T00:39:28.505332331Z" level=info msg="TearDown network for sandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\" successfully" Nov 8 00:39:28.508753 containerd[1577]: time="2025-11-08T00:39:28.508707136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:28.508810 containerd[1577]: time="2025-11-08T00:39:28.508764357Z" level=info msg="RemovePodSandbox \"d5556e6cfd0cc39555ec4bb5947a496a774045336ed4a2072646432858e558bb\" returns successfully" Nov 8 00:39:28.509531 containerd[1577]: time="2025-11-08T00:39:28.509275137Z" level=info msg="StopPodSandbox for \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\"" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.543 [WARNING][5441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"945f0c5d-79d5-427e-a435-dd67b16eeed0", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53", Pod:"calico-apiserver-69649455c-fj7f9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac21d99df6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.543 [INFO][5441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.543 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" iface="eth0" netns="" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.543 [INFO][5441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.543 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.564 [INFO][5448] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.564 [INFO][5448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.564 [INFO][5448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.569 [WARNING][5448] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.569 [INFO][5448] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.571 [INFO][5448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.575810 containerd[1577]: 2025-11-08 00:39:28.573 [INFO][5441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.575810 containerd[1577]: time="2025-11-08T00:39:28.575750033Z" level=info msg="TearDown network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\" successfully" Nov 8 00:39:28.575810 containerd[1577]: time="2025-11-08T00:39:28.575773983Z" level=info msg="StopPodSandbox for \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\" returns successfully" Nov 8 00:39:28.576568 containerd[1577]: time="2025-11-08T00:39:28.576527974Z" level=info msg="RemovePodSandbox for \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\"" Nov 8 00:39:28.576568 containerd[1577]: time="2025-11-08T00:39:28.576558984Z" level=info msg="Forcibly stopping sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\"" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.610 [WARNING][5462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0", GenerateName:"calico-apiserver-69649455c-", Namespace:"calico-apiserver", SelfLink:"", UID:"945f0c5d-79d5-427e-a435-dd67b16eeed0", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69649455c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"836f2076670e9ce87b1c33bc01902a404a8888697c9155901416eebb0d50be53", Pod:"calico-apiserver-69649455c-fj7f9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac21d99df6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.610 [INFO][5462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.610 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" iface="eth0" netns="" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.610 [INFO][5462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.610 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.631 [INFO][5469] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.631 [INFO][5469] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.632 [INFO][5469] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.637 [WARNING][5469] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.637 [INFO][5469] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" HandleID="k8s-pod-network.f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Workload="172--239--57--26-k8s-calico--apiserver--69649455c--fj7f9-eth0" Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.639 [INFO][5469] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.643676 containerd[1577]: 2025-11-08 00:39:28.641 [INFO][5462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf" Nov 8 00:39:28.644289 containerd[1577]: time="2025-11-08T00:39:28.643749980Z" level=info msg="TearDown network for sandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\" successfully" Nov 8 00:39:28.647019 containerd[1577]: time="2025-11-08T00:39:28.646959245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:28.647075 containerd[1577]: time="2025-11-08T00:39:28.647023705Z" level=info msg="RemovePodSandbox \"f5eb7f99e471c608430db15a983b1168f0cdd81cb26fa0492dc79b176589eeaf\" returns successfully" Nov 8 00:39:28.647580 containerd[1577]: time="2025-11-08T00:39:28.647555426Z" level=info msg="StopPodSandbox for \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\"" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.682 [WARNING][5484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c", Pod:"goldmane-666569f655-gxx8g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96329b68488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.682 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.682 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" iface="eth0" netns="" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.682 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.682 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.714 [INFO][5491] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.714 [INFO][5491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.714 [INFO][5491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.719 [WARNING][5491] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.719 [INFO][5491] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.720 [INFO][5491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.725176 containerd[1577]: 2025-11-08 00:39:28.722 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.725176 containerd[1577]: time="2025-11-08T00:39:28.724992647Z" level=info msg="TearDown network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\" successfully" Nov 8 00:39:28.725176 containerd[1577]: time="2025-11-08T00:39:28.725015367Z" level=info msg="StopPodSandbox for \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\" returns successfully" Nov 8 00:39:28.726264 containerd[1577]: time="2025-11-08T00:39:28.725865948Z" level=info msg="RemovePodSandbox for \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\"" Nov 8 00:39:28.726264 containerd[1577]: time="2025-11-08T00:39:28.725893828Z" level=info msg="Forcibly stopping sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\"" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.763 [WARNING][5505] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c4fcf74c-bacb-403a-b9d1-404b70dbc1f8", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"5056e792dd66d2282c9d0af6f1a4e097cf737d9fe31686aa512dc44c12c9e83c", Pod:"goldmane-666569f655-gxx8g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96329b68488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.763 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.763 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" iface="eth0" netns="" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.763 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.763 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.793 [INFO][5512] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.793 [INFO][5512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.793 [INFO][5512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.798 [WARNING][5512] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.798 [INFO][5512] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" HandleID="k8s-pod-network.2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Workload="172--239--57--26-k8s-goldmane--666569f655--gxx8g-eth0" Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.800 [INFO][5512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.804486 containerd[1577]: 2025-11-08 00:39:28.802 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880" Nov 8 00:39:28.805620 containerd[1577]: time="2025-11-08T00:39:28.804767882Z" level=info msg="TearDown network for sandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\" successfully" Nov 8 00:39:28.808343 containerd[1577]: time="2025-11-08T00:39:28.808216586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:28.808343 containerd[1577]: time="2025-11-08T00:39:28.808256856Z" level=info msg="RemovePodSandbox \"2d2bf2c2309f0b72b2b1ef4a1d1b99d21f6e1b3f54faa422460112faa6348880\" returns successfully" Nov 8 00:39:28.808835 containerd[1577]: time="2025-11-08T00:39:28.808808687Z" level=info msg="StopPodSandbox for \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\"" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.843 [WARNING][5526] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be1ef78a-3d23-4e67-a9e4-62513d5dd793", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0", Pod:"coredns-668d6bf9bc-zwjrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b75a135e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.843 [INFO][5526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.843 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" iface="eth0" netns="" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.843 [INFO][5526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.843 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.866 [INFO][5533] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.866 [INFO][5533] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.866 [INFO][5533] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.872 [WARNING][5533] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.872 [INFO][5533] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.873 [INFO][5533] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.878241 containerd[1577]: 2025-11-08 00:39:28.876 [INFO][5526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.880074 containerd[1577]: time="2025-11-08T00:39:28.878211907Z" level=info msg="TearDown network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\" successfully" Nov 8 00:39:28.880074 containerd[1577]: time="2025-11-08T00:39:28.879338478Z" level=info msg="StopPodSandbox for \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\" returns successfully" Nov 8 00:39:28.880074 containerd[1577]: time="2025-11-08T00:39:28.879714469Z" level=info msg="RemovePodSandbox for \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\"" Nov 8 00:39:28.880074 containerd[1577]: time="2025-11-08T00:39:28.879736229Z" level=info msg="Forcibly stopping sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\"" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.913 [WARNING][5547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be1ef78a-3d23-4e67-a9e4-62513d5dd793", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"41b2dd36826b00691d0c06fe8f7e4ce731c6257d3a2653a50459385be98c0cb0", Pod:"coredns-668d6bf9bc-zwjrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b75a135e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.914 [INFO][5547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.914 [INFO][5547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" iface="eth0" netns="" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.914 [INFO][5547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.914 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.935 [INFO][5554] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.935 [INFO][5554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.935 [INFO][5554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.940 [WARNING][5554] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.940 [INFO][5554] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" HandleID="k8s-pod-network.5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--zwjrp-eth0" Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.941 [INFO][5554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:28.945978 containerd[1577]: 2025-11-08 00:39:28.943 [INFO][5547] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce" Nov 8 00:39:28.946681 containerd[1577]: time="2025-11-08T00:39:28.946010943Z" level=info msg="TearDown network for sandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\" successfully" Nov 8 00:39:28.952622 containerd[1577]: time="2025-11-08T00:39:28.952595873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:28.952675 containerd[1577]: time="2025-11-08T00:39:28.952658263Z" level=info msg="RemovePodSandbox \"5f2d2eebd22d5e504fd63420e36c6a81de17714aa790931bc0dc7301ac8f85ce\" returns successfully" Nov 8 00:39:28.953413 containerd[1577]: time="2025-11-08T00:39:28.953093134Z" level=info msg="StopPodSandbox for \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\"" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:28.991 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe", Pod:"coredns-668d6bf9bc-sffch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4cff9a3f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:28.991 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:28.991 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" iface="eth0" netns="" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:28.991 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:28.991 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:29.012 [INFO][5576] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:29.012 [INFO][5576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:29.013 [INFO][5576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:29.018 [WARNING][5576] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:29.018 [INFO][5576] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:29.020 [INFO][5576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:29.024669 containerd[1577]: 2025-11-08 00:39:29.022 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.024669 containerd[1577]: time="2025-11-08T00:39:29.024548556Z" level=info msg="TearDown network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\" successfully" Nov 8 00:39:29.024669 containerd[1577]: time="2025-11-08T00:39:29.024569766Z" level=info msg="StopPodSandbox for \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\" returns successfully" Nov 8 00:39:29.025673 containerd[1577]: time="2025-11-08T00:39:29.025220497Z" level=info msg="RemovePodSandbox for \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\"" Nov 8 00:39:29.025673 containerd[1577]: time="2025-11-08T00:39:29.025246627Z" level=info msg="Forcibly stopping sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\"" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.057 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f23c5b5-4cfa-46d8-aaba-cb061e55e03e", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-26", ContainerID:"59aad89e76c3f8f54e3488bc4e9df657831e34aa4205d0705c276c91c952eafe", Pod:"coredns-668d6bf9bc-sffch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4cff9a3f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.057 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.057 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" iface="eth0" netns="" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.057 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.057 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.078 [INFO][5599] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.078 [INFO][5599] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.078 [INFO][5599] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.083 [WARNING][5599] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.083 [INFO][5599] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" HandleID="k8s-pod-network.fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Workload="172--239--57--26-k8s-coredns--668d6bf9bc--sffch-eth0" Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.085 [INFO][5599] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:29.089497 containerd[1577]: 2025-11-08 00:39:29.087 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340" Nov 8 00:39:29.089962 containerd[1577]: time="2025-11-08T00:39:29.089530480Z" level=info msg="TearDown network for sandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\" successfully" Nov 8 00:39:29.093240 containerd[1577]: time="2025-11-08T00:39:29.093212055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:29.093440 containerd[1577]: time="2025-11-08T00:39:29.093416825Z" level=info msg="RemovePodSandbox \"fbc73cdc70d6c794862ec13d8f67dd072284a2a31702074215a784d80dcc4340\" returns successfully" Nov 8 00:39:29.095229 containerd[1577]: time="2025-11-08T00:39:29.095191498Z" level=info msg="StopPodSandbox for \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\"" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.130 [WARNING][5613] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" WorkloadEndpoint="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.130 [INFO][5613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.130 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" iface="eth0" netns="" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.130 [INFO][5613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.130 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.152 [INFO][5621] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.152 [INFO][5621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.152 [INFO][5621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.157 [WARNING][5621] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.157 [INFO][5621] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.158 [INFO][5621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:29.163089 containerd[1577]: 2025-11-08 00:39:29.160 [INFO][5613] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.163089 containerd[1577]: time="2025-11-08T00:39:29.163040466Z" level=info msg="TearDown network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\" successfully" Nov 8 00:39:29.163089 containerd[1577]: time="2025-11-08T00:39:29.163066976Z" level=info msg="StopPodSandbox for \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\" returns successfully" Nov 8 00:39:29.164449 containerd[1577]: time="2025-11-08T00:39:29.163742437Z" level=info msg="RemovePodSandbox for \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\"" Nov 8 00:39:29.164449 containerd[1577]: time="2025-11-08T00:39:29.163772457Z" level=info msg="Forcibly stopping sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\"" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.196 [WARNING][5635] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" WorkloadEndpoint="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.196 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.196 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" iface="eth0" netns="" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.196 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.196 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.220 [INFO][5642] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.220 [INFO][5642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.220 [INFO][5642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.225 [WARNING][5642] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.225 [INFO][5642] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" HandleID="k8s-pod-network.1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Workload="172--239--57--26-k8s-whisker--66f57989b9--sfrdn-eth0" Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.226 [INFO][5642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:39:29.232176 containerd[1577]: 2025-11-08 00:39:29.228 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5" Nov 8 00:39:29.232176 containerd[1577]: time="2025-11-08T00:39:29.230772573Z" level=info msg="TearDown network for sandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\" successfully" Nov 8 00:39:29.234043 containerd[1577]: time="2025-11-08T00:39:29.234002757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:39:29.234088 containerd[1577]: time="2025-11-08T00:39:29.234049877Z" level=info msg="RemovePodSandbox \"1024e38b427c6b1e83b9a65feeefd40ea53e42431aa5ae37c101fdf90f8187b5\" returns successfully" Nov 8 00:39:31.994199 kubelet[2675]: E1108 00:39:31.994036 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:39:34.994084 kubelet[2675]: E1108 00:39:34.994015 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:39:35.995981 kubelet[2675]: E1108 00:39:35.995843 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:39:36.994257 kubelet[2675]: E1108 00:39:36.993513 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:39:37.995644 kubelet[2675]: E1108 00:39:37.995584 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:39:38.001006 kubelet[2675]: E1108 00:39:38.000935 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:40.995051 containerd[1577]: time="2025-11-08T00:39:40.993983284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:39:41.133949 containerd[1577]: time="2025-11-08T00:39:41.133875395Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:41.134885 containerd[1577]: time="2025-11-08T00:39:41.134819471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:39:41.134885 containerd[1577]: time="2025-11-08T00:39:41.134847451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:39:41.135263 kubelet[2675]: E1108 00:39:41.134965 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:39:41.135263 kubelet[2675]: E1108 00:39:41.135002 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:39:41.135263 kubelet[2675]: E1108 00:39:41.135106 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:446bc2bb53ee4664a662201ea699a9cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:41.137852 containerd[1577]: time="2025-11-08T00:39:41.137826890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:39:41.262474 containerd[1577]: time="2025-11-08T00:39:41.262356922Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:41.263305 containerd[1577]: time="2025-11-08T00:39:41.263212109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:39:41.263305 containerd[1577]: time="2025-11-08T00:39:41.263286258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:39:41.263442 kubelet[2675]: E1108 00:39:41.263380 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:39:41.263442 kubelet[2675]: E1108 00:39:41.263411 2675 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:39:41.263533 kubelet[2675]: E1108 00:39:41.263483 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:41.264744 kubelet[2675]: E1108 00:39:41.264704 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 
00:39:42.994432 containerd[1577]: time="2025-11-08T00:39:42.994390628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:39:43.147740 containerd[1577]: time="2025-11-08T00:39:43.147609781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:43.149182 containerd[1577]: time="2025-11-08T00:39:43.148856756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:39:43.149182 containerd[1577]: time="2025-11-08T00:39:43.148911956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:39:43.149314 kubelet[2675]: E1108 00:39:43.149277 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:39:43.149802 kubelet[2675]: E1108 00:39:43.149327 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:39:43.149802 kubelet[2675]: E1108 00:39:43.149453 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6j5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b6646fb4-vqzk2_calico-system(39e4cff7-6b76-45e5-9e76-44418507cde4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:43.151776 kubelet[2675]: E1108 00:39:43.151741 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:39:44.992910 kubelet[2675]: E1108 00:39:44.992850 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:45.994951 kubelet[2675]: E1108 00:39:45.994893 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:47.998549 containerd[1577]: time="2025-11-08T00:39:47.998444479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:39:48.135158 containerd[1577]: time="2025-11-08T00:39:48.135032773Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:48.136471 containerd[1577]: time="2025-11-08T00:39:48.136392319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:39:48.136580 containerd[1577]: time="2025-11-08T00:39:48.136506859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:39:48.136813 kubelet[2675]: E1108 00:39:48.136756 2675 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:39:48.136813 kubelet[2675]: E1108 00:39:48.136806 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:39:48.138565 kubelet[2675]: E1108 00:39:48.136998 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:48.138882 containerd[1577]: time="2025-11-08T00:39:48.138000374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:39:48.269677 containerd[1577]: time="2025-11-08T00:39:48.269436794Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:48.270934 containerd[1577]: time="2025-11-08T00:39:48.270875959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:39:48.271156 containerd[1577]: time="2025-11-08T00:39:48.270991319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:48.271242 kubelet[2675]: E1108 00:39:48.271194 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:39:48.271356 kubelet[2675]: E1108 00:39:48.271247 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:39:48.271536 kubelet[2675]: E1108 00:39:48.271457 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtp5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gxx8g_calico-system(c4fcf74c-bacb-403a-b9d1-404b70dbc1f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:48.272000 containerd[1577]: time="2025-11-08T00:39:48.271961246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:39:48.274404 kubelet[2675]: E1108 00:39:48.273533 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:39:48.401856 containerd[1577]: time="2025-11-08T00:39:48.401775930Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:48.402639 containerd[1577]: time="2025-11-08T00:39:48.402544667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:39:48.402639 containerd[1577]: time="2025-11-08T00:39:48.402591177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:39:48.402830 kubelet[2675]: E1108 00:39:48.402770 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:39:48.402944 kubelet[2675]: E1108 00:39:48.402835 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:39:48.403377 kubelet[2675]: E1108 00:39:48.402995 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:48.404852 kubelet[2675]: E1108 00:39:48.404646 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:39:49.000180 containerd[1577]: time="2025-11-08T00:39:48.997832777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:39:49.146681 containerd[1577]: time="2025-11-08T00:39:49.146473732Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:49.147749 containerd[1577]: time="2025-11-08T00:39:49.147622499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:39:49.147749 containerd[1577]: time="2025-11-08T00:39:49.147697119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:49.148579 kubelet[2675]: E1108 00:39:49.148048 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:49.148579 kubelet[2675]: E1108 00:39:49.148101 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:49.148579 kubelet[2675]: E1108 00:39:49.148243 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5mcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-fj7f9_calico-apiserver(945f0c5d-79d5-427e-a435-dd67b16eeed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:49.149725 kubelet[2675]: E1108 00:39:49.149650 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:39:49.996309 containerd[1577]: time="2025-11-08T00:39:49.995736347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:39:50.133466 containerd[1577]: time="2025-11-08T00:39:50.133236660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:39:50.134420 containerd[1577]: time="2025-11-08T00:39:50.134310187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:39:50.134732 containerd[1577]: time="2025-11-08T00:39:50.134484326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:39:50.134918 kubelet[2675]: E1108 00:39:50.134789 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:50.134998 kubelet[2675]: E1108 00:39:50.134923 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:39:50.135196 kubelet[2675]: E1108 
00:39:50.135105 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqxqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-qzjh9_calico-apiserver(548b6544-42df-4869-bfa7-bb27245d2cb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:39:50.136791 kubelet[2675]: E1108 00:39:50.136680 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:39:53.007169 kubelet[2675]: E1108 00:39:53.005033 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:39:56.992942 kubelet[2675]: E1108 00:39:56.992774 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:39:57.996111 kubelet[2675]: E1108 00:39:57.995749 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:40:02.996899 kubelet[2675]: E1108 00:40:02.996404 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:40:03.008158 kubelet[2675]: E1108 00:40:03.000818 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:40:03.998301 kubelet[2675]: E1108 00:40:03.998195 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:40:04.002047 kubelet[2675]: E1108 00:40:04.002005 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:40:04.002248 kubelet[2675]: E1108 00:40:04.002099 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:40:05.995235 kubelet[2675]: E1108 00:40:05.994991 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:40:12.001180 kubelet[2675]: E1108 00:40:12.000965 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:40:14.995007 kubelet[2675]: E1108 00:40:14.994971 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:40:14.995679 kubelet[2675]: E1108 00:40:14.995609 2675 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:40:15.998874 kubelet[2675]: E1108 00:40:15.998674 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:40:16.003200 kubelet[2675]: E1108 00:40:16.003158 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:40:16.992885 kubelet[2675]: E1108 00:40:16.992840 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:40:17.998520 kubelet[2675]: E1108 00:40:17.998222 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 
00:40:24.994198 containerd[1577]: time="2025-11-08T00:40:24.994015891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:40:25.313073 containerd[1577]: time="2025-11-08T00:40:25.312739946Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:25.313965 containerd[1577]: time="2025-11-08T00:40:25.313774015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:40:25.313965 containerd[1577]: time="2025-11-08T00:40:25.313872975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:40:25.314083 kubelet[2675]: E1108 00:40:25.314020 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:40:25.314083 kubelet[2675]: E1108 00:40:25.314067 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:40:25.315107 kubelet[2675]: E1108 00:40:25.314274 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6j5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b6646fb4-vqzk2_calico-system(39e4cff7-6b76-45e5-9e76-44418507cde4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:25.316378 kubelet[2675]: E1108 00:40:25.316288 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4" Nov 8 00:40:25.999729 containerd[1577]: time="2025-11-08T00:40:25.999230058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:40:26.166511 containerd[1577]: time="2025-11-08T00:40:26.166393154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:26.168025 containerd[1577]: time="2025-11-08T00:40:26.167946063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:40:26.168701 containerd[1577]: time="2025-11-08T00:40:26.168034804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:40:26.168745 kubelet[2675]: E1108 00:40:26.168223 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:40:26.169226 kubelet[2675]: E1108 00:40:26.168297 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:40:26.171184 kubelet[2675]: E1108 00:40:26.169741 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:446bc2bb53ee4664a662201ea699a9cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:26.173027 containerd[1577]: time="2025-11-08T00:40:26.172951652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:40:26.313167 containerd[1577]: time="2025-11-08T00:40:26.311254247Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:26.314023 containerd[1577]: time="2025-11-08T00:40:26.313919006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:40:26.314023 containerd[1577]: time="2025-11-08T00:40:26.313991326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:40:26.314374 kubelet[2675]: E1108 00:40:26.314319 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:40:26.314745 
kubelet[2675]: E1108 00:40:26.314407 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:40:26.316303 kubelet[2675]: E1108 00:40:26.314794 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5945b5bfd9-lpcq2_calico-system(12f92cbb-00df-467b-a39b-79b1d77d20a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:26.317250 kubelet[2675]: E1108 00:40:26.317204 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" 
podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1" Nov 8 00:40:26.994693 kubelet[2675]: E1108 00:40:26.994635 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0" Nov 8 00:40:27.997161 kubelet[2675]: E1108 00:40:27.996082 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1" Nov 8 00:40:28.993824 kubelet[2675]: E1108 00:40:28.993684 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1" Nov 8 00:40:29.995784 kubelet[2675]: E1108 00:40:29.995702 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Nov 8 00:40:32.995503 containerd[1577]: time="2025-11-08T00:40:32.995382566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:40:33.138928 containerd[1577]: time="2025-11-08T00:40:33.138878291Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:33.140930 containerd[1577]: time="2025-11-08T00:40:33.140877271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:40:33.141021 containerd[1577]: time="2025-11-08T00:40:33.140987591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 
00:40:33.141449 kubelet[2675]: E1108 00:40:33.141359 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:40:33.141872 kubelet[2675]: E1108 00:40:33.141460 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:40:33.142232 kubelet[2675]: E1108 00:40:33.141968 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtp5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-gxx8g_calico-system(c4fcf74c-bacb-403a-b9d1-404b70dbc1f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:33.143528 kubelet[2675]: E1108 00:40:33.143439 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8" Nov 8 00:40:36.546488 systemd[1]: Started sshd@7-172.239.57.26:22-147.75.109.163:51440.service - OpenSSH per-connection server daemon (147.75.109.163:51440). Nov 8 00:40:36.895379 sshd[5722]: Accepted publickey for core from 147.75.109.163 port 51440 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:40:36.899999 sshd[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:40:36.916785 systemd-logind[1553]: New session 8 of user core. Nov 8 00:40:36.922852 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:40:37.299035 sshd[5722]: pam_unix(sshd:session): session closed for user core Nov 8 00:40:37.305123 systemd[1]: sshd@7-172.239.57.26:22-147.75.109.163:51440.service: Deactivated successfully. Nov 8 00:40:37.317421 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:40:37.322890 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:40:37.326991 systemd-logind[1553]: Removed session 8. Nov 8 00:40:37.923297 systemd[1]: run-containerd-runc-k8s.io-f1749a7a08cb27c4c7ae052de1c7290d5317044ce3979c7386975e4123bbadb8-runc.soz9SP.mount: Deactivated successfully. 
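Every failed pull in this log follows the same shape: containerd logs "trying next host - response was http.StatusNotFound", meaning the registry's manifest endpoint answered 404 for the requested tag, which containerd surfaces as the NotFound rpc error kubelet then reports. The check can be reproduced outside the kubelet with the Docker Registry v2 API; the sketch below assumes ghcr.io issues anonymous pull tokens from its /token endpoint, which is standard registry behavior but not something this log confirms:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// manifestStatus sketches the Registry v2 lookup behind containerd's
// "trying next host - response was http.StatusNotFound" message.
// Assumption: ghcr.io grants anonymous pull tokens via /token, as
// Docker-compatible registries typically do.
func manifestStatus(repo, tag string) (int, error) {
	// 1. Fetch an anonymous bearer token scoped to the repository.
	tokURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return 0, err
	}

	// 2. HEAD the manifest; a 404 here is exactly the "not found"
	//    that containerd reports above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	res.Body.Close()
	return res.StatusCode, nil
}

func main() {
	status, err := manifestStatus("flatcar/calico/goldmane", "v3.30.4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("manifest HTTP status:", status) // 404 matches the log
}
```

That every flatcar/calico image fails identically at the v3.30.4 tag suggests the tag simply is not published under that namespace; the remedy would be on the publishing side or in repointing the image references, neither of which this log alone can confirm.
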
Nov 8 00:40:38.995084 kubelet[2675]: E1108 00:40:38.994757 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4"
Nov 8 00:40:40.998461 containerd[1577]: time="2025-11-08T00:40:40.998397064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:40:41.000318 kubelet[2675]: E1108 00:40:41.000191 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1"
Nov 8 00:40:41.134529 containerd[1577]: time="2025-11-08T00:40:41.134385108Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:40:41.135668 containerd[1577]: time="2025-11-08T00:40:41.135504997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:40:41.135668 containerd[1577]: time="2025-11-08T00:40:41.135576557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:40:41.135748 kubelet[2675]: E1108 00:40:41.135685 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:40:41.135748 kubelet[2675]: E1108 00:40:41.135724 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:40:41.135945 kubelet[2675]: E1108 00:40:41.135890 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5mcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-fj7f9_calico-apiserver(945f0c5d-79d5-427e-a435-dd67b16eeed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:40:41.137275 containerd[1577]: time="2025-11-08T00:40:41.137051198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:40:41.137336 kubelet[2675]: E1108 00:40:41.137168 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0"
Nov 8 00:40:41.275714 containerd[1577]: time="2025-11-08T00:40:41.275544582Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:40:41.276568 containerd[1577]: time="2025-11-08T00:40:41.276467563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:40:41.276682 containerd[1577]: time="2025-11-08T00:40:41.276555163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:40:41.276772 kubelet[2675]: E1108 00:40:41.276686 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:40:41.276772 kubelet[2675]: E1108 00:40:41.276727 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:40:41.276886 kubelet[2675]: E1108 00:40:41.276834 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqxqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69649455c-qzjh9_calico-apiserver(548b6544-42df-4869-bfa7-bb27245d2cb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:40:41.278229 kubelet[2675]: E1108 00:40:41.278172 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1"
Nov 8 00:40:41.997423 kubelet[2675]: E1108 00:40:41.994050 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:40:41.999714 containerd[1577]: time="2025-11-08T00:40:41.999344049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:40:42.130735 containerd[1577]: time="2025-11-08T00:40:42.130481925Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:40:42.131756 containerd[1577]: time="2025-11-08T00:40:42.131620175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:40:42.131756 containerd[1577]: time="2025-11-08T00:40:42.131715565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:40:42.132216 kubelet[2675]: E1108 00:40:42.131934 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:40:42.132216 kubelet[2675]: E1108 00:40:42.131976 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:40:42.132216 kubelet[2675]: E1108 00:40:42.132065 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:40:42.136498 containerd[1577]: time="2025-11-08T00:40:42.135018115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:40:42.265542 containerd[1577]: time="2025-11-08T00:40:42.264634202Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:40:42.266318 containerd[1577]: time="2025-11-08T00:40:42.265843733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:40:42.266441 containerd[1577]: time="2025-11-08T00:40:42.266320772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:40:42.267007 kubelet[2675]: E1108 00:40:42.266690 2675 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:40:42.267007 kubelet[2675]: E1108 00:40:42.266774 2675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:40:42.267007 kubelet[2675]: E1108 00:40:42.266944 2675 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zf6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mdrsj_calico-system(3e31263c-8cf9-4e4b-a04e-7c52af3f73c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:40:42.268368 kubelet[2675]: E1108 00:40:42.268317 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1"
Nov 8 00:40:42.356378 systemd[1]: Started sshd@8-172.239.57.26:22-147.75.109.163:49598.service - OpenSSH per-connection server daemon (147.75.109.163:49598).
Nov 8 00:40:42.688476 sshd[5767]: Accepted publickey for core from 147.75.109.163 port 49598 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:42.690853 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:42.698235 systemd-logind[1553]: New session 9 of user core.
Nov 8 00:40:42.700401 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 8 00:40:43.002950 sshd[5767]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:43.015377 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit.
Nov 8 00:40:43.016625 systemd[1]: sshd@8-172.239.57.26:22-147.75.109.163:49598.service: Deactivated successfully.
Nov 8 00:40:43.027152 systemd[1]: session-9.scope: Deactivated successfully.
Nov 8 00:40:43.030628 systemd-logind[1553]: Removed session 9.
Nov 8 00:40:48.003330 kubelet[2675]: E1108 00:40:48.003280 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8"
Nov 8 00:40:48.072633 systemd[1]: Started sshd@9-172.239.57.26:22-147.75.109.163:49610.service - OpenSSH per-connection server daemon (147.75.109.163:49610).
Nov 8 00:40:48.450083 sshd[5782]: Accepted publickey for core from 147.75.109.163 port 49610 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:48.452886 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:48.460747 systemd-logind[1553]: New session 10 of user core.
Nov 8 00:40:48.470412 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 8 00:40:48.914101 sshd[5782]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:48.920123 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit.
Nov 8 00:40:48.921118 systemd[1]: sshd@9-172.239.57.26:22-147.75.109.163:49610.service: Deactivated successfully.
Nov 8 00:40:48.925622 systemd[1]: session-10.scope: Deactivated successfully.
Nov 8 00:40:48.927485 systemd-logind[1553]: Removed session 10.
Nov 8 00:40:48.972484 systemd[1]: Started sshd@10-172.239.57.26:22-147.75.109.163:49622.service - OpenSSH per-connection server daemon (147.75.109.163:49622).
Nov 8 00:40:49.311698 sshd[5801]: Accepted publickey for core from 147.75.109.163 port 49622 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:49.313516 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:49.318109 systemd-logind[1553]: New session 11 of user core.
Nov 8 00:40:49.326415 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 8 00:40:49.786365 sshd[5801]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:49.790460 systemd[1]: sshd@10-172.239.57.26:22-147.75.109.163:49622.service: Deactivated successfully.
Nov 8 00:40:49.796386 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit.
Nov 8 00:40:49.797013 systemd[1]: session-11.scope: Deactivated successfully.
Nov 8 00:40:49.800109 systemd-logind[1553]: Removed session 11.
Nov 8 00:40:49.842208 systemd[1]: Started sshd@11-172.239.57.26:22-147.75.109.163:49634.service - OpenSSH per-connection server daemon (147.75.109.163:49634).
Nov 8 00:40:50.180787 sshd[5813]: Accepted publickey for core from 147.75.109.163 port 49634 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:50.186503 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:50.193690 systemd-logind[1553]: New session 12 of user core.
Nov 8 00:40:50.198558 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:40:50.519225 sshd[5813]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:50.522873 systemd[1]: sshd@11-172.239.57.26:22-147.75.109.163:49634.service: Deactivated successfully.
Nov 8 00:40:50.529150 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:40:50.529886 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:40:50.532404 systemd-logind[1553]: Removed session 12.
Nov 8 00:40:50.994165 kubelet[2675]: E1108 00:40:50.993070 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:40:51.999314 kubelet[2675]: E1108 00:40:51.999268 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4"
Nov 8 00:40:52.001157 kubelet[2675]: E1108 00:40:51.999961 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1"
Nov 8 00:40:52.003246 kubelet[2675]: E1108 00:40:52.003169 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1"
Nov 8 00:40:52.995478 kubelet[2675]: E1108 00:40:52.995418 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1"
Nov 8 00:40:53.996163 kubelet[2675]: E1108 00:40:53.995297 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0"
Nov 8 00:40:55.579961 systemd[1]: Started sshd@12-172.239.57.26:22-147.75.109.163:42040.service - OpenSSH per-connection server daemon (147.75.109.163:42040).
Nov 8 00:40:55.922066 sshd[5847]: Accepted publickey for core from 147.75.109.163 port 42040 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:55.926730 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:55.937273 systemd-logind[1553]: New session 13 of user core.
Nov 8 00:40:55.942579 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:40:56.275868 sshd[5847]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:56.282060 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:40:56.285160 systemd[1]: sshd@12-172.239.57.26:22-147.75.109.163:42040.service: Deactivated successfully.
Nov 8 00:40:56.290906 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:40:56.295010 systemd-logind[1553]: Removed session 13.
Nov 8 00:40:56.346540 systemd[1]: Started sshd@13-172.239.57.26:22-147.75.109.163:42044.service - OpenSSH per-connection server daemon (147.75.109.163:42044).
Nov 8 00:40:56.715051 sshd[5861]: Accepted publickey for core from 147.75.109.163 port 42044 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:56.719493 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:56.727754 systemd-logind[1553]: New session 14 of user core.
Nov 8 00:40:56.732610 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:40:57.432320 sshd[5861]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:57.435602 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:40:57.437256 systemd[1]: sshd@13-172.239.57.26:22-147.75.109.163:42044.service: Deactivated successfully.
Nov 8 00:40:57.447631 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:40:57.449478 systemd-logind[1553]: Removed session 14.
Nov 8 00:40:57.488348 systemd[1]: Started sshd@14-172.239.57.26:22-147.75.109.163:42046.service - OpenSSH per-connection server daemon (147.75.109.163:42046).
Nov 8 00:40:57.846241 sshd[5873]: Accepted publickey for core from 147.75.109.163 port 42046 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:57.846173 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:57.853749 systemd-logind[1553]: New session 15 of user core.
Nov 8 00:40:57.860459 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:40:58.735690 sshd[5873]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:58.743962 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:40:58.746581 systemd[1]: sshd@14-172.239.57.26:22-147.75.109.163:42046.service: Deactivated successfully.
Nov 8 00:40:58.758261 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:40:58.760754 systemd-logind[1553]: Removed session 15.
Nov 8 00:40:58.801258 systemd[1]: Started sshd@15-172.239.57.26:22-147.75.109.163:42050.service - OpenSSH per-connection server daemon (147.75.109.163:42050).
Nov 8 00:40:59.167602 sshd[5894]: Accepted publickey for core from 147.75.109.163 port 42050 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:40:59.169949 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:40:59.179055 systemd-logind[1553]: New session 16 of user core.
Nov 8 00:40:59.182468 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:40:59.807368 sshd[5894]: pam_unix(sshd:session): session closed for user core
Nov 8 00:40:59.814671 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:40:59.815510 systemd[1]: sshd@15-172.239.57.26:22-147.75.109.163:42050.service: Deactivated successfully.
Nov 8 00:40:59.823034 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:40:59.824991 systemd-logind[1553]: Removed session 16.
Nov 8 00:40:59.865927 systemd[1]: Started sshd@16-172.239.57.26:22-147.75.109.163:42058.service - OpenSSH per-connection server daemon (147.75.109.163:42058).
Nov 8 00:40:59.996177 kubelet[2675]: E1108 00:40:59.995112 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8"
Nov 8 00:41:00.206272 sshd[5907]: Accepted publickey for core from 147.75.109.163 port 42058 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:41:00.208453 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:41:00.215731 systemd-logind[1553]: New session 17 of user core.
Nov 8 00:41:00.225595 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:41:00.530176 sshd[5907]: pam_unix(sshd:session): session closed for user core
Nov 8 00:41:00.535743 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit.
Nov 8 00:41:00.536728 systemd[1]: sshd@16-172.239.57.26:22-147.75.109.163:42058.service: Deactivated successfully.
Nov 8 00:41:00.549786 systemd[1]: session-17.scope: Deactivated successfully.
Nov 8 00:41:00.552255 systemd-logind[1553]: Removed session 17.
Nov 8 00:41:02.997749 kubelet[2675]: E1108 00:41:02.997456 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1"
Nov 8 00:41:03.996074 kubelet[2675]: E1108 00:41:03.995077 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4"
Nov 8 00:41:03.999667 kubelet[2675]: E1108 00:41:03.999580 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1"
Nov 8 00:41:05.594253 systemd[1]: Started sshd@17-172.239.57.26:22-147.75.109.163:34998.service - OpenSSH per-connection server daemon (147.75.109.163:34998).
Nov 8 00:41:05.950165 sshd[5923]: Accepted publickey for core from 147.75.109.163 port 34998 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:41:05.951899 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:41:05.961817 systemd-logind[1553]: New session 18 of user core.
Nov 8 00:41:05.971420 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:41:05.995344 kubelet[2675]: E1108 00:41:05.995252 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0"
Nov 8 00:41:06.298473 sshd[5923]: pam_unix(sshd:session): session closed for user core
Nov 8 00:41:06.307800 systemd[1]: sshd@17-172.239.57.26:22-147.75.109.163:34998.service: Deactivated successfully.
Nov 8 00:41:06.309792 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:41:06.314950 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:41:06.317885 systemd-logind[1553]: Removed session 18.
Nov 8 00:41:06.994944 kubelet[2675]: E1108 00:41:06.994690 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:41:06.994944 kubelet[2675]: E1108 00:41:06.994866 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:41:06.999546 kubelet[2675]: E1108 00:41:06.999505 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1"
Nov 8 00:41:09.995660 kubelet[2675]: E1108 00:41:09.995295 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:41:10.995234 kubelet[2675]: E1108 00:41:10.994568 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8"
Nov 8 00:41:11.365451 systemd[1]: Started sshd@18-172.239.57.26:22-147.75.109.163:53100.service - OpenSSH per-connection server daemon (147.75.109.163:53100).
Nov 8 00:41:11.709849 sshd[5962]: Accepted publickey for core from 147.75.109.163 port 53100 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:41:11.714273 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:41:11.720221 systemd-logind[1553]: New session 19 of user core.
Nov 8 00:41:11.724450 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:41:12.114465 sshd[5962]: pam_unix(sshd:session): session closed for user core
Nov 8 00:41:12.120834 systemd[1]: sshd@18-172.239.57.26:22-147.75.109.163:53100.service: Deactivated successfully.
Nov 8 00:41:12.121083 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:41:12.128120 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:41:12.129794 systemd-logind[1553]: Removed session 19.
Nov 8 00:41:15.997820 kubelet[2675]: E1108 00:41:15.997387 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1"
Nov 8 00:41:16.996824 kubelet[2675]: E1108 00:41:16.996441 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4"
Nov 8 00:41:17.175672 systemd[1]: Started sshd@19-172.239.57.26:22-147.75.109.163:53102.service - OpenSSH per-connection server daemon (147.75.109.163:53102).
Nov 8 00:41:17.536283 sshd[5977]: Accepted publickey for core from 147.75.109.163 port 53102 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:41:17.538070 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:41:17.543114 systemd-logind[1553]: New session 20 of user core.
Nov 8 00:41:17.550541 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:41:17.858014 sshd[5977]: pam_unix(sshd:session): session closed for user core
Nov 8 00:41:17.866238 systemd[1]: sshd@19-172.239.57.26:22-147.75.109.163:53102.service: Deactivated successfully.
Nov 8 00:41:17.876769 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:41:17.880895 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:41:17.882313 systemd-logind[1553]: Removed session 20.
Nov 8 00:41:18.000296 kubelet[2675]: E1108 00:41:18.000203 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1"
Nov 8 00:41:18.993262 kubelet[2675]: E1108 00:41:18.993188 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0"
Nov 8 00:41:20.996338 kubelet[2675]: E1108 00:41:20.996268 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1"
Nov 8 00:41:22.918547 systemd[1]: Started sshd@20-172.239.57.26:22-147.75.109.163:42548.service - OpenSSH per-connection server daemon (147.75.109.163:42548).
Nov 8 00:41:22.996094 kubelet[2675]: E1108 00:41:22.995414 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8"
Nov 8 00:41:23.262817 sshd[5991]: Accepted publickey for core from 147.75.109.163 port 42548 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:41:23.265803 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:41:23.271920 systemd-logind[1553]: New session 21 of user core.
Nov 8 00:41:23.278524 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:41:23.620885 sshd[5991]: pam_unix(sshd:session): session closed for user core
Nov 8 00:41:23.627479 systemd[1]: sshd@20-172.239.57.26:22-147.75.109.163:42548.service: Deactivated successfully.
Nov 8 00:41:23.636617 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:41:23.636734 systemd-logind[1553]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:41:23.641038 systemd-logind[1553]: Removed session 21.
Nov 8 00:41:25.993479 kubelet[2675]: E1108 00:41:25.993268 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Nov 8 00:41:28.681409 systemd[1]: Started sshd@21-172.239.57.26:22-147.75.109.163:42550.service - OpenSSH per-connection server daemon (147.75.109.163:42550).
Nov 8 00:41:29.025366 sshd[6007]: Accepted publickey for core from 147.75.109.163 port 42550 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:41:29.026907 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:41:29.032547 systemd-logind[1553]: New session 22 of user core.
Nov 8 00:41:29.037541 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:41:29.363343 sshd[6007]: pam_unix(sshd:session): session closed for user core
Nov 8 00:41:29.369964 systemd[1]: sshd@21-172.239.57.26:22-147.75.109.163:42550.service: Deactivated successfully.
Nov 8 00:41:29.376682 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:41:29.379022 systemd-logind[1553]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:41:29.380469 systemd-logind[1553]: Removed session 22.
Nov 8 00:41:30.993829 kubelet[2675]: E1108 00:41:30.993543 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-qzjh9" podUID="548b6544-42df-4869-bfa7-bb27245d2cb1"
Nov 8 00:41:30.997539 kubelet[2675]: E1108 00:41:30.997435 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69649455c-fj7f9" podUID="945f0c5d-79d5-427e-a435-dd67b16eeed0"
Nov 8 00:41:31.996986 kubelet[2675]: E1108 00:41:31.996778 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b6646fb4-vqzk2" podUID="39e4cff7-6b76-45e5-9e76-44418507cde4"
Nov 8 00:41:32.000046 kubelet[2675]: E1108 00:41:31.999930 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5945b5bfd9-lpcq2" podUID="12f92cbb-00df-467b-a39b-79b1d77d20a1"
Nov 8 00:41:32.996013 kubelet[2675]: E1108 00:41:32.995949 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mdrsj" podUID="3e31263c-8cf9-4e4b-a04e-7c52af3f73c1"
Nov 8 00:41:33.995968 kubelet[2675]: E1108 00:41:33.994849 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxx8g" podUID="c4fcf74c-bacb-403a-b9d1-404b70dbc1f8"
Nov 8 00:41:34.422915 systemd[1]: Started sshd@22-172.239.57.26:22-147.75.109.163:36132.service - OpenSSH per-connection server daemon (147.75.109.163:36132).
Nov 8 00:41:34.767269 sshd[6021]: Accepted publickey for core from 147.75.109.163 port 36132 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:41:34.770094 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:41:34.783179 systemd-logind[1553]: New session 23 of user core.
Nov 8 00:41:34.786425 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:41:35.121480 sshd[6021]: pam_unix(sshd:session): session closed for user core
Nov 8 00:41:35.127440 systemd-logind[1553]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:41:35.130245 systemd[1]: sshd@22-172.239.57.26:22-147.75.109.163:36132.service: Deactivated successfully.
Nov 8 00:41:35.137953 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:41:35.139266 systemd-logind[1553]: Removed session 23.
Nov 8 00:41:37.995067 kubelet[2675]: E1108 00:41:37.995013 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"