Nov 8 00:35:56.982202 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:35:56.982224 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:35:56.982232 kernel: BIOS-provided physical RAM map:
Nov 8 00:35:56.982238 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 8 00:35:56.982243 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 8 00:35:56.982253 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:35:56.982259 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 8 00:35:56.982265 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 8 00:35:56.982271 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:35:56.982276 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:35:56.982282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:35:56.982287 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:35:56.982293 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 8 00:35:56.982301 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:35:56.982308 kernel: NX (Execute Disable) protection: active
Nov 8 00:35:56.982314 kernel: APIC: Static calls initialized
Nov 8 00:35:56.982320 kernel: SMBIOS 2.8 present.
Nov 8 00:35:56.982326 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 8 00:35:56.982332 kernel: Hypervisor detected: KVM
Nov 8 00:35:56.982341 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:35:56.982347 kernel: kvm-clock: using sched offset of 5583827632 cycles
Nov 8 00:35:56.982353 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:35:56.982359 kernel: tsc: Detected 1999.997 MHz processor
Nov 8 00:35:56.982366 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:35:56.982373 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:35:56.982379 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 8 00:35:56.982385 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:35:56.982391 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:35:56.982399 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 8 00:35:56.982406 kernel: Using GB pages for direct mapping
Nov 8 00:35:56.982412 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:35:56.982418 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 8 00:35:56.982424 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:35:56.982430 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:35:56.982436 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:35:56.982442 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 8 00:35:56.982448 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:35:56.982457 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:35:56.982463 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:35:56.982469 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:35:56.982479 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 8 00:35:56.982485 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 8 00:35:56.982492 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 8 00:35:56.982500 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 8 00:35:56.982507 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 8 00:35:56.982513 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 8 00:35:56.982520 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 8 00:35:56.982526 kernel: No NUMA configuration found
Nov 8 00:35:56.982532 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 8 00:35:56.982539 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Nov 8 00:35:56.982545 kernel: Zone ranges:
Nov 8 00:35:56.982554 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:35:56.982560 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:35:56.982566 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 8 00:35:56.982572 kernel: Movable zone start for each node
Nov 8 00:35:56.982579 kernel: Early memory node ranges
Nov 8 00:35:56.982585 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:35:56.982591 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 8 00:35:56.982598 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 8 00:35:56.982604 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 8 00:35:56.982610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:35:56.982619 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:35:56.982625 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 8 00:35:56.982632 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:35:56.982638 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:35:56.982645 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:35:56.982651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:35:56.982657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:35:56.982664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:35:56.982670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:35:56.982679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:35:56.982685 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:35:56.982692 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:35:56.982698 kernel: TSC deadline timer available
Nov 8 00:35:56.982705 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:35:56.982711 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:35:56.982717 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:35:56.982723 kernel: kvm-guest: setup PV sched yield
Nov 8 00:35:56.982730 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:35:56.982739 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:35:56.982747 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:35:56.982753 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:35:56.982760 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:35:56.982766 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:35:56.982773 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:35:56.982779 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:35:56.982785 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:35:56.982792 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:35:56.982802 kernel: random: crng init done
Nov 8 00:35:56.982808 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:35:56.982814 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:35:56.982821 kernel: Fallback order for Node 0: 0
Nov 8 00:35:56.982827 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Nov 8 00:35:56.982834 kernel: Policy zone: Normal
Nov 8 00:35:56.982840 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:35:56.982847 kernel: software IO TLB: area num 2.
Nov 8 00:35:56.982856 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 227300K reserved, 0K cma-reserved)
Nov 8 00:35:56.982863 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:35:56.982870 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:35:56.982876 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:35:56.982882 kernel: Dynamic Preempt: voluntary
Nov 8 00:35:56.982889 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:35:56.982896 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:35:56.982903 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:35:56.982910 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:35:56.982919 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:35:56.982925 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:35:56.982932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:35:56.982939 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:35:56.982945 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:35:56.982952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:35:56.982958 kernel: Console: colour VGA+ 80x25
Nov 8 00:35:56.982965 kernel: printk: console [tty0] enabled
Nov 8 00:35:56.982971 kernel: printk: console [ttyS0] enabled
Nov 8 00:35:56.982981 kernel: ACPI: Core revision 20230628
Nov 8 00:35:56.982987 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:35:56.982994 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:35:56.983000 kernel: x2apic enabled
Nov 8 00:35:56.983017 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:35:56.983026 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:35:56.983033 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:35:56.983040 kernel: kvm-guest: setup PV IPIs
Nov 8 00:35:56.983046 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:35:56.983053 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:35:56.983060 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Nov 8 00:35:56.983067 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:35:56.983076 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:35:56.983082 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:35:56.983103 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:35:56.983125 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:35:56.983132 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:35:56.983143 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 8 00:35:56.983150 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:35:56.983157 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:35:56.983164 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:35:56.983171 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:35:56.983178 kernel: active return thunk: srso_alias_return_thunk
Nov 8 00:35:56.983185 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:35:56.983191 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 8 00:35:56.983201 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:35:56.983208 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:35:56.983215 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:35:56.983221 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:35:56.983228 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 8 00:35:56.983235 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:35:56.983241 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 8 00:35:56.983249 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 8 00:35:56.983256 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:35:56.983265 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:35:56.983271 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:35:56.983278 kernel: landlock: Up and running.
Nov 8 00:35:56.983285 kernel: SELinux: Initializing.
Nov 8 00:35:56.983292 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:35:56.983298 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:35:56.983305 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 8 00:35:56.983312 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:35:56.983319 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:35:56.983328 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:35:56.983335 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:35:56.983341 kernel: ... version: 0
Nov 8 00:35:56.983348 kernel: ... bit width: 48
Nov 8 00:35:56.983355 kernel: ... generic registers: 6
Nov 8 00:35:56.983362 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:35:56.983369 kernel: ... max period: 00007fffffffffff
Nov 8 00:35:56.983375 kernel: ... fixed-purpose events: 0
Nov 8 00:35:56.983382 kernel: ... event mask: 000000000000003f
Nov 8 00:35:56.983391 kernel: signal: max sigframe size: 3376
Nov 8 00:35:56.983398 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:35:56.983405 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:35:56.983411 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:35:56.983418 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:35:56.983424 kernel: .... node #0, CPUs: #1
Nov 8 00:35:56.983431 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:35:56.983438 kernel: smpboot: Max logical packages: 1
Nov 8 00:35:56.983444 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Nov 8 00:35:56.983453 kernel: devtmpfs: initialized
Nov 8 00:35:56.983460 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:35:56.983467 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:35:56.983474 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:35:56.983481 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:35:56.983487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:35:56.983494 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:35:56.983502 kernel: audit: type=2000 audit(1762562155.970:1): state=initialized audit_enabled=0 res=1
Nov 8 00:35:56.983508 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:35:56.983517 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:35:56.983524 kernel: cpuidle: using governor menu
Nov 8 00:35:56.983531 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:35:56.983537 kernel: dca service started, version 1.12.1
Nov 8 00:35:56.983544 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:35:56.983551 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:35:56.983557 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:35:56.983564 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:35:56.983571 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:35:56.983580 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:35:56.983587 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:35:56.983594 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:35:56.983601 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:35:56.983607 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:35:56.983615 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:35:56.983621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:35:56.983628 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:35:56.983635 kernel: ACPI: Interpreter enabled
Nov 8 00:35:56.983644 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:35:56.983651 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:35:56.983658 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:35:56.983664 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:35:56.983671 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:35:56.983678 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:35:56.983861 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:35:56.984002 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:35:56.984174 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:35:56.984185 kernel: PCI host bridge to bus 0000:00
Nov 8 00:35:56.984323 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:35:56.984465 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:35:56.984598 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:35:56.984719 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 8 00:35:56.984836 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:35:56.984973 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 8 00:35:56.985089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:35:56.985258 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:35:56.985395 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:35:56.985521 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 8 00:35:56.985645 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 8 00:35:56.985780 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 8 00:35:56.985906 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:35:56.986041 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Nov 8 00:35:56.986197 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Nov 8 00:35:56.986326 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 8 00:35:56.986450 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 8 00:35:56.986583 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:35:56.986716 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 8 00:35:56.986840 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 8 00:35:56.986964 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 8 00:35:56.987090 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 8 00:35:56.989280 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:35:56.989418 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:35:56.989553 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:35:56.989688 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Nov 8 00:35:56.989812 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Nov 8 00:35:56.989945 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:35:56.990071 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:35:56.990080 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:35:56.990087 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:35:56.990094 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:35:56.990104 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:35:56.990127 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:35:56.990134 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:35:56.990141 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:35:56.990148 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:35:56.990154 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:35:56.990161 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:35:56.990168 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:35:56.990174 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:35:56.990185 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:35:56.990192 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:35:56.990198 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:35:56.990205 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:35:56.990212 kernel: iommu: Default domain type: Translated
Nov 8 00:35:56.990219 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:35:56.990226 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:35:56.990233 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:35:56.990240 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 8 00:35:56.990250 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 8 00:35:56.990382 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:35:56.990508 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:35:56.992232 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:35:56.992246 kernel: vgaarb: loaded
Nov 8 00:35:56.992254 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:35:56.992261 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:35:56.992268 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:35:56.992279 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:35:56.992286 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:35:56.992292 kernel: pnp: PnP ACPI init
Nov 8 00:35:56.992441 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:35:56.992452 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:35:56.992459 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:35:56.992465 kernel: NET: Registered PF_INET protocol family
Nov 8 00:35:56.992473 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:35:56.992483 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:35:56.992490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:35:56.992497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:35:56.992504 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:35:56.992510 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:35:56.992517 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:35:56.992524 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:35:56.992531 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:35:56.992537 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:35:56.992658 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:35:56.992774 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:35:56.992891 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:35:56.993007 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 8 00:35:56.993863 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:35:56.993990 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 8 00:35:56.994000 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:35:56.994008 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:35:56.994019 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 8 00:35:56.994026 kernel: Initialise system trusted keyrings
Nov 8 00:35:56.994034 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:35:56.994041 kernel: Key type asymmetric registered
Nov 8 00:35:56.994048 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:35:56.994055 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:35:56.994062 kernel: io scheduler mq-deadline registered
Nov 8 00:35:56.994068 kernel: io scheduler kyber registered
Nov 8 00:35:56.994075 kernel: io scheduler bfq registered
Nov 8 00:35:56.994081 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:35:56.994091 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:35:56.994098 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:35:56.994270 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:35:56.994279 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:35:56.994286 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:35:56.994292 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:35:56.994299 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:35:56.994437 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:35:56.994452 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:35:56.994571 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:35:56.994688 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:35:56 UTC (1762562156)
Nov 8 00:35:56.994807 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:35:56.994821 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:35:56.994828 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:35:56.994834 kernel: Segment Routing with IPv6
Nov 8 00:35:56.994841 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:35:56.994852 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:35:56.994859 kernel: Key type dns_resolver registered
Nov 8 00:35:56.994866 kernel: IPI shorthand broadcast: enabled
Nov 8 00:35:56.994873 kernel: sched_clock: Marking stable (851004688, 309200964)->(1285218847, -125013195)
Nov 8 00:35:56.994879 kernel: registered taskstats version 1
Nov 8 00:35:56.994886 kernel: Loading compiled-in X.509 certificates
Nov 8 00:35:56.994893 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:35:56.994900 kernel: Key type .fscrypt registered
Nov 8 00:35:56.994906 kernel: Key type fscrypt-provisioning registered
Nov 8 00:35:56.994915 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:35:56.994922 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:35:56.994929 kernel: ima: No architecture policies found
Nov 8 00:35:56.994936 kernel: clk: Disabling unused clocks
Nov 8 00:35:56.994943 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:35:56.994950 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:35:56.994956 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:35:56.994963 kernel: Run /init as init process
Nov 8 00:35:56.994970 kernel: with arguments:
Nov 8 00:35:56.994979 kernel: /init
Nov 8 00:35:56.994986 kernel: with environment:
Nov 8 00:35:56.994992 kernel: HOME=/
Nov 8 00:35:56.994999 kernel: TERM=linux
Nov 8 00:35:56.995008 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:35:56.995017 systemd[1]: Detected virtualization kvm.
Nov 8 00:35:56.995025 systemd[1]: Detected architecture x86-64.
Nov 8 00:35:56.995032 systemd[1]: Running in initrd.
Nov 8 00:35:56.995041 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:35:56.995048 systemd[1]: Hostname set to <localhost>.
Nov 8 00:35:56.995055 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:35:56.995062 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:35:56.995069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:35:56.995094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:35:56.997132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:35:56.997146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:35:56.997154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:35:56.997162 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:35:56.997171 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:35:56.997179 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:35:56.997191 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:35:56.997199 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:35:56.997206 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:35:56.997214 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:35:56.997221 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:35:56.997228 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:35:56.997235 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:35:56.997243 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:35:56.997250 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:35:56.997260 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:35:56.997267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:35:56.997275 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:35:56.997282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:35:56.997289 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:35:56.997297 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:35:56.997304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:35:56.997311 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:35:56.997318 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:35:56.997328 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:35:56.997336 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:35:56.997364 systemd-journald[177]: Collecting audit messages is disabled.
Nov 8 00:35:56.997382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:35:56.997393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:35:56.997404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:35:56.997411 systemd-journald[177]: Journal started
Nov 8 00:35:56.997429 systemd-journald[177]: Runtime Journal (/run/log/journal/e577be7faabd49769a753bf494aafd34) is 8.0M, max 78.3M, 70.3M free.
Nov 8 00:35:57.004687 systemd-modules-load[178]: Inserted module 'overlay'
Nov 8 00:35:57.092859 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:35:57.092888 kernel: Bridge firewalling registered
Nov 8 00:35:57.029821 systemd-modules-load[178]: Inserted module 'br_netfilter'
Nov 8 00:35:57.096823 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:35:57.097906 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:35:57.098858 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:35:57.100324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:35:57.109344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:35:57.111977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:35:57.115259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:35:57.124056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:35:57.137313 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:35:57.159579 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:35:57.162259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:35:57.164068 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:35:57.172238 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:35:57.178283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:35:57.182271 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:35:57.197133 dracut-cmdline[208]: dracut-dracut-053
Nov 8 00:35:57.198604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:35:57.203346 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:35:57.213454 systemd-resolved[211]: Positive Trust Anchors:
Nov 8 00:35:57.213467 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:35:57.213495 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:35:57.217483 systemd-resolved[211]: Defaulting to hostname 'linux'.
Nov 8 00:35:57.219037 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:35:57.223157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:35:57.295155 kernel: SCSI subsystem initialized
Nov 8 00:35:57.305135 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:35:57.317150 kernel: iscsi: registered transport (tcp)
Nov 8 00:35:57.338603 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:35:57.338657 kernel: QLogic iSCSI HBA Driver
Nov 8 00:35:57.394650 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:35:57.403266 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:35:57.432311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:35:57.432349 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:35:57.432370 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:35:57.478140 kernel: raid6: avx2x4 gen() 30956 MB/s
Nov 8 00:35:57.499407 kernel: raid6: avx2x2 gen() 27104 MB/s
Nov 8 00:35:57.517252 kernel: raid6: avx2x1 gen() 24183 MB/s
Nov 8 00:35:57.517276 kernel: raid6: using algorithm avx2x4 gen() 30956 MB/s
Nov 8 00:35:57.537445 kernel: raid6: .... xor() 5048 MB/s, rmw enabled
Nov 8 00:35:57.537463 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:35:57.561146 kernel: xor: automatically using best checksumming function avx
Nov 8 00:35:57.696151 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:35:57.711834 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:35:57.718258 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:35:57.743403 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 8 00:35:57.748010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:35:57.758224 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:35:57.774706 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Nov 8 00:35:57.812818 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:35:57.820255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:35:57.889349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:35:57.895821 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:35:57.919347 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:35:57.921493 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:35:57.922516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:35:57.923663 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:35:57.931233 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:35:57.947460 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:35:57.977130 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:35:57.980134 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:35:57.986164 kernel: libata version 3.00 loaded.
Nov 8 00:35:57.993148 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:35:57.998135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:35:57.998253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:35:58.001320 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:35:58.003213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:35:58.003330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:35:58.005197 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:35:58.012348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:35:58.024127 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:35:58.027148 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:35:58.027172 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:35:58.166198 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:35:58.167181 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:35:58.167497 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:35:58.199136 kernel: scsi host1: ahci
Nov 8 00:35:58.200137 kernel: scsi host2: ahci
Nov 8 00:35:58.200317 kernel: scsi host3: ahci
Nov 8 00:35:58.203693 kernel: scsi host4: ahci
Nov 8 00:35:58.203983 kernel: scsi host5: ahci
Nov 8 00:35:58.204178 kernel: scsi host6: ahci
Nov 8 00:35:58.204338 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 33
Nov 8 00:35:58.204358 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 33
Nov 8 00:35:58.204368 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 33
Nov 8 00:35:58.204378 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 33
Nov 8 00:35:58.204387 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 33
Nov 8 00:35:58.204397 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 33
Nov 8 00:35:58.314034 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:35:58.323289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:35:58.344135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:35:58.520379 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:35:58.520419 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:35:58.520431 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:35:58.521128 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:35:58.523163 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:35:58.529183 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:35:58.548466 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 8 00:35:58.548770 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 8 00:35:58.573376 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:35:58.575610 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 8 00:35:58.575798 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:35:58.584811 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:35:58.584834 kernel: GPT:9289727 != 167739391
Nov 8 00:35:58.584855 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:35:58.587449 kernel: GPT:9289727 != 167739391
Nov 8 00:35:58.590441 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:35:58.590458 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:35:58.594377 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:35:58.635154 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (455)
Nov 8 00:35:58.645134 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (447)
Nov 8 00:35:58.643585 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:35:58.650243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:35:58.660178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:35:58.662163 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:35:58.668232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:35:58.680260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:35:58.685310 disk-uuid[569]: Primary Header is updated.
Nov 8 00:35:58.685310 disk-uuid[569]: Secondary Entries is updated.
Nov 8 00:35:58.685310 disk-uuid[569]: Secondary Header is updated.
Nov 8 00:35:58.692268 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:35:58.699135 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:35:59.703186 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:35:59.704183 disk-uuid[570]: The operation has completed successfully.
Nov 8 00:35:59.760347 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:35:59.760497 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:35:59.771270 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:35:59.776888 sh[584]: Success
Nov 8 00:35:59.793152 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:35:59.844165 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:35:59.852217 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:35:59.854361 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:35:59.880153 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:35:59.880186 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:35:59.884443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:35:59.889945 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:35:59.889959 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:35:59.901126 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:35:59.902413 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:35:59.903838 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:35:59.915230 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:35:59.918383 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:35:59.932263 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:35:59.932293 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:35:59.935367 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:35:59.943542 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:35:59.943573 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:35:59.962280 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:35:59.961882 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:35:59.969760 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:35:59.979292 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:36:00.065475 ignition[679]: Ignition 2.19.0
Nov 8 00:36:00.065488 ignition[679]: Stage: fetch-offline
Nov 8 00:36:00.065530 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:36:00.065585 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:36:00.072866 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:36:00.066287 ignition[679]: parsed url from cmdline: ""
Nov 8 00:36:00.066292 ignition[679]: no config URL provided
Nov 8 00:36:00.066299 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:36:00.066312 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:36:00.066318 ignition[679]: failed to fetch config: resource requires networking
Nov 8 00:36:00.066527 ignition[679]: Ignition finished successfully
Nov 8 00:36:00.082143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:36:00.094400 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:36:00.116360 systemd-networkd[770]: lo: Link UP
Nov 8 00:36:00.116373 systemd-networkd[770]: lo: Gained carrier
Nov 8 00:36:00.118076 systemd-networkd[770]: Enumeration completed
Nov 8 00:36:00.118568 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:36:00.118572 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:36:00.119796 systemd-networkd[770]: eth0: Link UP
Nov 8 00:36:00.119801 systemd-networkd[770]: eth0: Gained carrier
Nov 8 00:36:00.119809 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:36:00.120049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:36:00.122049 systemd[1]: Reached target network.target - Network.
Nov 8 00:36:00.130270 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:36:00.145227 ignition[772]: Ignition 2.19.0
Nov 8 00:36:00.145243 ignition[772]: Stage: fetch
Nov 8 00:36:00.145395 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:36:00.145408 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:36:00.145498 ignition[772]: parsed url from cmdline: ""
Nov 8 00:36:00.145503 ignition[772]: no config URL provided
Nov 8 00:36:00.145509 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:36:00.145519 ignition[772]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:36:00.145540 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 8 00:36:00.145699 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:36:00.345912 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 8 00:36:00.346190 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:36:00.746461 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 8 00:36:00.746661 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:36:00.788230 systemd-networkd[770]: eth0: DHCPv4 address 172.239.57.24/24, gateway 172.239.57.1 acquired from 23.215.118.212
Nov 8 00:36:01.195336 systemd-networkd[770]: eth0: Gained IPv6LL
Nov 8 00:36:01.546816 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 8 00:36:01.644209 ignition[772]: PUT result: OK
Nov 8 00:36:01.644273 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 8 00:36:01.757241 ignition[772]: GET result: OK
Nov 8 00:36:01.757351 ignition[772]: parsing config with SHA512: 784754bb92aec266de9f26135f6f860272d05f555ebaab5fb70f8298cefb171e03ddfe6b98c4bba771cdd5049bee4e8d6bf722a2134703b85e9dadc4884db090
Nov 8 00:36:01.762434 unknown[772]: fetched base config from "system"
Nov 8 00:36:01.762450 unknown[772]: fetched base config from "system"
Nov 8 00:36:01.762768 ignition[772]: fetch: fetch complete
Nov 8 00:36:01.762468 unknown[772]: fetched user config from "akamai"
Nov 8 00:36:01.762775 ignition[772]: fetch: fetch passed
Nov 8 00:36:01.765785 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:36:01.762829 ignition[772]: Ignition finished successfully
Nov 8 00:36:01.777324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:36:01.795600 ignition[780]: Ignition 2.19.0
Nov 8 00:36:01.795617 ignition[780]: Stage: kargs
Nov 8 00:36:01.795793 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:36:01.795806 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:36:01.799898 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:36:01.796554 ignition[780]: kargs: kargs passed
Nov 8 00:36:01.796604 ignition[780]: Ignition finished successfully
Nov 8 00:36:01.807267 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:36:01.834072 ignition[786]: Ignition 2.19.0
Nov 8 00:36:01.834083 ignition[786]: Stage: disks
Nov 8 00:36:01.834279 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:36:01.840810 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:36:01.834292 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:36:01.860189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:36:01.838483 ignition[786]: disks: disks passed
Nov 8 00:36:01.861719 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:36:01.838581 ignition[786]: Ignition finished successfully
Nov 8 00:36:01.863779 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:36:01.865501 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:36:01.867155 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:36:01.883317 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:36:01.903249 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:36:01.908024 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:36:01.918260 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:36:02.007608 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:36:02.008033 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:36:02.009437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:36:02.025238 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:36:02.028360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:36:02.030310 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:36:02.031743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:36:02.031823 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:36:02.042452 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (802)
Nov 8 00:36:02.047042 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:36:02.047088 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:36:02.047011 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:36:02.051756 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:36:02.058129 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:36:02.058166 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:36:02.060309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:36:02.063033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:36:02.109535 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:36:02.115198 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:36:02.120099 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:36:02.124403 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:36:02.217555 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:36:02.226196 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:36:02.230243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:36:02.243783 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:36:02.249907 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:36:02.268490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:36:02.274929 ignition[916]: INFO : Ignition 2.19.0 Nov 8 00:36:02.276032 ignition[916]: INFO : Stage: mount Nov 8 00:36:02.277346 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:02.277346 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:02.280175 ignition[916]: INFO : mount: mount passed Nov 8 00:36:02.281042 ignition[916]: INFO : Ignition finished successfully Nov 8 00:36:02.282917 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:36:02.296220 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:36:03.014282 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:36:03.029524 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (928) Nov 8 00:36:03.029569 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:36:03.033303 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:36:03.036220 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:36:03.042593 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:36:03.042686 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:36:03.047282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:36:03.074016 ignition[944]: INFO : Ignition 2.19.0 Nov 8 00:36:03.075053 ignition[944]: INFO : Stage: files Nov 8 00:36:03.076780 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:03.076780 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:03.076780 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:36:03.079635 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:36:03.079635 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:36:03.083594 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:36:03.084818 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:36:03.086226 unknown[944]: wrote ssh authorized keys file for user: core Nov 8 00:36:03.087341 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:36:03.088483 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:36:03.089722 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:36:03.375079 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:36:03.595841 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:36:03.595841 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:36:03.598549 ignition[944]: 
INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:36:04.420510 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:36:04.858834 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:04.858834 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(f): [started] setting preset 
to enabled for "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:36:04.863720 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:36:04.863720 ignition[944]: INFO : files: files passed Nov 8 00:36:04.863720 ignition[944]: INFO : Ignition finished successfully Nov 8 00:36:04.862855 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:36:04.881323 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:36:04.894320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:36:04.895458 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:36:04.895580 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:36:04.915198 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:36:04.915198 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:36:04.918218 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:36:04.920293 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:36:04.922746 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:36:04.928284 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:36:04.955627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:36:04.955789 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:36:04.958063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:36:04.960059 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:36:04.960939 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:36:04.967314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:36:04.981874 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:36:04.989265 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:36:04.999834 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:36:05.001290 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:36:05.002999 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:36:05.004653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:36:05.004763 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:36:05.006630 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:36:05.007688 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:36:05.009329 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:36:05.010820 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
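
Every files-stage op logged above corresponds to one section of the user config Ignition fetched from the metadata service: op(2) to passwd.users, ops (3) through (8) and (a) to storage.files, op(9) to storage.links, and ops (b) through (f) to systemd.units. A skeletal reconstruction of what that config plausibly contained, in Ignition's native JSON; paths, URLs, and unit names are copied from the log, the spec version is assumed, and all inline file and unit contents are elided:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (elided)"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" } },
          { "path": "/home/core/install.sh" },
          { "path": "/home/core/nginx.yaml" },
          { "path": "/home/core/nfs-pod.yaml" },
          { "path": "/home/core/nfs-pvc.yaml" },
          { "path": "/etc/flatcar/update.conf" },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "(unit text elided)" },
          { "name": "coreos-metadata.service",
            "dropins": [ { "name": "00-custom-metadata.conf", "contents": "(drop-in text elided)" } ] }
        ]
      }
    }
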
Nov 8 00:36:05.012324 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:36:05.015950 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:36:05.016773 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:36:05.017656 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:36:05.019320 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:36:05.020995 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:36:05.022541 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:36:05.022677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:36:05.024528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:36:05.025622 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:36:05.027142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:36:05.027255 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:36:05.028846 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:36:05.028949 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:36:05.031173 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:36:05.031288 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:36:05.032345 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:36:05.032447 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:36:05.040646 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:36:05.043521 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:36:05.044694 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:36:05.044854 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:36:05.051329 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:36:05.051441 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:36:05.058773 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:36:05.058942 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:36:05.064184 ignition[998]: INFO : Ignition 2.19.0 Nov 8 00:36:05.064184 ignition[998]: INFO : Stage: umount Nov 8 00:36:05.064184 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:05.064184 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:05.064184 ignition[998]: INFO : umount: umount passed Nov 8 00:36:05.064184 ignition[998]: INFO : Ignition finished successfully Nov 8 00:36:05.065528 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:36:05.065661 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:36:05.071230 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:36:05.071290 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:36:05.072047 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:36:05.072100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:36:05.074272 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:36:05.074324 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 8 00:36:05.075491 systemd[1]: Stopped target network.target - Network. Nov 8 00:36:05.078239 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:36:05.078295 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:36:05.079605 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:36:05.080293 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:36:05.106846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:36:05.108643 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:36:05.110049 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:36:05.111703 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:36:05.111767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:36:05.113459 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:36:05.113507 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:36:05.114891 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:36:05.114957 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:36:05.116372 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:36:05.116425 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:36:05.117990 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:36:05.119652 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:36:05.122422 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:36:05.123033 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:36:05.123174 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:36:05.123229 systemd-networkd[770]: eth0: DHCPv6 lease lost Nov 8 00:36:05.127677 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:36:05.127771 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:36:05.130089 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:36:05.130364 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:36:05.133569 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:36:05.133761 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:36:05.135438 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:36:05.135504 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:36:05.153532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:36:05.154282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:36:05.154343 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:36:05.155177 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:36:05.155250 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:36:05.156798 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:36:05.156849 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:36:05.158477 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:36:05.158528 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 8 00:36:05.160463 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:36:05.176301 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:36:05.176436 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:36:05.180854 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:36:05.181054 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:36:05.182836 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:36:05.182892 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:36:05.184388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:36:05.184429 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:36:05.186010 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:36:05.186064 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:36:05.188297 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:36:05.188348 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:36:05.189969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:36:05.190020 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:36:05.197251 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:36:05.198371 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:36:05.198429 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:36:05.199246 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:36:05.199300 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:36:05.200069 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:36:05.200153 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:36:05.201875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:36:05.201926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:36:05.210503 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:36:05.210621 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:36:05.212084 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:36:05.222371 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:36:05.229539 systemd[1]: Switching root. 
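
"Switching root" is the pivot out of the initrd: systemd moves the prepared /sysroot mount over /, chroots into it, and re-executes itself from the real root filesystem. The mount gymnastics reduce to a few syscalls, sketched below in Go after the classic switch_root pattern (an illustration of the technique, not systemd's implementation; deleting the old initramfs and re-exec'ing init are omitted):

    package main

    import "golang.org/x/sys/unix"

    func main() {
        // Enter the prepared root and move its mount onto /.
        if err := unix.Chdir("/sysroot"); err != nil {
            panic(err)
        }
        if err := unix.Mount(".", "/", "", unix.MS_MOVE, ""); err != nil {
            panic(err)
        }
        // Make the moved tree the process root and re-anchor the cwd.
        if err := unix.Chroot("."); err != nil {
            panic(err)
        }
        if err := unix.Chdir("/"); err != nil {
            panic(err)
        }
        // A real switch-root would now exec the new init so it becomes PID 1.
    }
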
Nov 8 00:36:05.256379 systemd-journald[177]: Journal stopped Nov 8
00:35:56.982598 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Nov 8 00:35:56.982604 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Nov 8 00:35:56.982610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:35:56.982619 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 8 00:35:56.982625 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Nov 8 00:35:56.982632 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 8 00:35:56.982638 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:35:56.982645 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:35:56.982651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:35:56.982657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:35:56.982664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:35:56.982670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:35:56.982679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:35:56.982685 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:35:56.982692 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:35:56.982698 kernel: TSC deadline timer available Nov 8 00:35:56.982705 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:35:56.982711 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:35:56.982717 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 8 00:35:56.982723 kernel: kvm-guest: setup PV sched yield Nov 8 00:35:56.982730 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 8 00:35:56.982739 kernel: Booting paravirtualized kernel on KVM Nov 8 00:35:56.982747 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:35:56.982753 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:35:56.982760 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:35:56.982766 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:35:56.982773 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:35:56.982779 kernel: kvm-guest: PV spinlocks enabled Nov 8 00:35:56.982785 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:35:56.982792 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:35:56.982802 kernel: random: crng init done Nov 8 00:35:56.982808 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:35:56.982814 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:35:56.982821 kernel: Fallback order for Node 0: 0 Nov 8 00:35:56.982827 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Nov 8 00:35:56.982834 kernel: Policy zone: Normal Nov 8 00:35:56.982840 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:35:56.982847 kernel: software IO TLB: area num 2. 
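
Note that the command line echoed here starts with an extra "rootflags=rw mount.usrflags=ro": the bootloader prepends its stub arguments to the ones already present in the BOOT_IMAGE entry, and the duplication is harmless since both copies agree. Kernel parameters are just whitespace-separated key=value tokens, as the toy Go parser below shows (illustrative only, not the kernel's parser):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        cmdline := "rootflags=rw mount.usrflags=ro root=LABEL=ROOT " +
            "console=ttyS0,115200n8 rootflags=rw mount.usrflags=ro"
        params := map[string]string{}
        for _, tok := range strings.Fields(cmdline) {
            // Split at the first '=' only, so values such as LABEL=ROOT
            // keep their own '='; bare flags get an empty value.
            key, val, _ := strings.Cut(tok, "=")
            params[key] = val
        }
        fmt.Println(params["root"])    // LABEL=ROOT
        fmt.Println(params["console"]) // ttyS0,115200n8
    }
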
Nov 8 00:35:56.982856 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 227300K reserved, 0K cma-reserved) Nov 8 00:35:56.982863 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:35:56.982870 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:35:56.982876 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:35:56.982882 kernel: Dynamic Preempt: voluntary Nov 8 00:35:56.982889 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:35:56.982896 kernel: rcu: RCU event tracing is enabled. Nov 8 00:35:56.982903 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:35:56.982910 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:35:56.982919 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:35:56.982925 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:35:56.982932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:35:56.982939 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:35:56.982945 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 8 00:35:56.982952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:35:56.982958 kernel: Console: colour VGA+ 80x25 Nov 8 00:35:56.982965 kernel: printk: console [tty0] enabled Nov 8 00:35:56.982971 kernel: printk: console [ttyS0] enabled Nov 8 00:35:56.982981 kernel: ACPI: Core revision 20230628 Nov 8 00:35:56.982987 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 8 00:35:56.982994 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:35:56.983000 kernel: x2apic enabled Nov 8 00:35:56.983017 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:35:56.983026 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 8 00:35:56.983033 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 8 00:35:56.983040 kernel: kvm-guest: setup PV IPIs Nov 8 00:35:56.983046 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:35:56.983053 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 8 00:35:56.983060 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997) Nov 8 00:35:56.983067 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 8 00:35:56.983076 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 8 00:35:56.983082 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 8 00:35:56.983103 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:35:56.983125 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:35:56.983132 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:35:56.983143 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 8 00:35:56.983150 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:35:56.983157 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:35:56.983164 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 8 00:35:56.983171 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
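
All of the mitigation decisions above (Spectre V1/V2, speculative store bypass, SRSO, TSA) are exported at runtime as one file per vulnerability under /sys/devices/system/cpu/vulnerabilities, which is a convenient way to audit a guest after boot. A small Go sketch that dumps that table:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const dir = "/sys/devices/system/cpu/vulnerabilities"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            data, err := os.ReadFile(filepath.Join(dir, e.Name()))
            if err != nil {
                continue
            }
            // e.g. "spectre_v2: Mitigation: Retpolines"
            fmt.Printf("%s: %s\n", e.Name(), strings.TrimSpace(string(data)))
        }
    }
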
Nov 8 00:35:56.983178 kernel: active return thunk: srso_alias_return_thunk Nov 8 00:35:56.983185 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 8 00:35:56.983191 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Nov 8 00:35:56.983201 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:35:56.983208 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:35:56.983215 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:35:56.983221 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:35:56.983228 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 8 00:35:56.983235 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:35:56.983241 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Nov 8 00:35:56.983249 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Nov 8 00:35:56.983256 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:35:56.983265 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:35:56.983271 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:35:56.983278 kernel: landlock: Up and running. Nov 8 00:35:56.983285 kernel: SELinux: Initializing. Nov 8 00:35:56.983292 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:35:56.983298 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:35:56.983305 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Nov 8 00:35:56.983312 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:35:56.983319 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:35:56.983328 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:35:56.983335 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 8 00:35:56.983341 kernel: ... version: 0 Nov 8 00:35:56.983348 kernel: ... bit width: 48 Nov 8 00:35:56.983355 kernel: ... generic registers: 6 Nov 8 00:35:56.983362 kernel: ... value mask: 0000ffffffffffff Nov 8 00:35:56.983369 kernel: ... max period: 00007fffffffffff Nov 8 00:35:56.983375 kernel: ... fixed-purpose events: 0 Nov 8 00:35:56.983382 kernel: ... event mask: 000000000000003f Nov 8 00:35:56.983391 kernel: signal: max sigframe size: 3376 Nov 8 00:35:56.983398 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:35:56.983405 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:35:56.983411 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:35:56.983418 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:35:56.983424 kernel: .... 
node #0, CPUs: #1 Nov 8 00:35:56.983431 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:35:56.983438 kernel: smpboot: Max logical packages: 1 Nov 8 00:35:56.983444 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS) Nov 8 00:35:56.983453 kernel: devtmpfs: initialized Nov 8 00:35:56.983460 kernel: x86/mm: Memory block size: 128MB Nov 8 00:35:56.983467 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:35:56.983474 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:35:56.983481 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:35:56.983487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:35:56.983494 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:35:56.983502 kernel: audit: type=2000 audit(1762562155.970:1): state=initialized audit_enabled=0 res=1 Nov 8 00:35:56.983508 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:35:56.983517 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:35:56.983524 kernel: cpuidle: using governor menu Nov 8 00:35:56.983531 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:35:56.983537 kernel: dca service started, version 1.12.1 Nov 8 00:35:56.983544 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 8 00:35:56.983551 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 8 00:35:56.983557 kernel: PCI: Using configuration type 1 for base access Nov 8 00:35:56.983564 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 8 00:35:56.983571 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:35:56.983580 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:35:56.983587 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:35:56.983594 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:35:56.983601 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:35:56.983607 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:35:56.983615 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:35:56.983621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:35:56.983628 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:35:56.983635 kernel: ACPI: Interpreter enabled Nov 8 00:35:56.983644 kernel: ACPI: PM: (supports S0 S3 S5) Nov 8 00:35:56.983651 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:35:56.983658 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:35:56.983664 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:35:56.983671 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 8 00:35:56.983678 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:35:56.983861 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:35:56.984002 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 8 00:35:56.984174 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 8 00:35:56.984185 kernel: PCI host bridge to bus 0000:00 Nov 8 00:35:56.984323 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:35:56.984465 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:35:56.984598 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:35:56.984719 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 8 00:35:56.984836 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:35:56.984973 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Nov 8 00:35:56.985089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:35:56.985258 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 8 00:35:56.985395 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 8 00:35:56.985521 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 8 00:35:56.985645 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 8 00:35:56.985780 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 8 00:35:56.985906 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:35:56.986041 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Nov 8 00:35:56.986197 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Nov 8 00:35:56.986326 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 8 00:35:56.986450 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 8 00:35:56.986583 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:35:56.986716 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 8 00:35:56.986840 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 8 00:35:56.986964 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 8 00:35:56.987090 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 8 00:35:56.989280 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 8 00:35:56.989418 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 8 00:35:56.989553 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 8 00:35:56.989688 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Nov 8 00:35:56.989812 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Nov 8 00:35:56.989945 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 8 00:35:56.990071 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 8 00:35:56.990080 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:35:56.990087 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:35:56.990094 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:35:56.990104 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:35:56.990127 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 8 00:35:56.990134 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 8 00:35:56.990141 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 8 00:35:56.990148 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 8 00:35:56.990154 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 8 00:35:56.990161 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 8 00:35:56.990168 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 8 00:35:56.990174 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 8 00:35:56.990185 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 8 00:35:56.990192 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 8 00:35:56.990198 
kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 8 00:35:56.990205 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 8 00:35:56.990212 kernel: iommu: Default domain type: Translated Nov 8 00:35:56.990219 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:35:56.990226 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:35:56.990233 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:35:56.990240 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Nov 8 00:35:56.990250 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Nov 8 00:35:56.990382 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 8 00:35:56.990508 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 8 00:35:56.992232 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:35:56.992246 kernel: vgaarb: loaded Nov 8 00:35:56.992254 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 8 00:35:56.992261 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 8 00:35:56.992268 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:35:56.992279 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:35:56.992286 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:35:56.992292 kernel: pnp: PnP ACPI init Nov 8 00:35:56.992441 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 8 00:35:56.992452 kernel: pnp: PnP ACPI: found 5 devices Nov 8 00:35:56.992459 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:35:56.992465 kernel: NET: Registered PF_INET protocol family Nov 8 00:35:56.992473 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:35:56.992483 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:35:56.992490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:35:56.992497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:35:56.992504 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:35:56.992510 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:35:56.992517 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:35:56.992524 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:35:56.992531 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:35:56.992537 kernel: NET: Registered PF_XDP protocol family Nov 8 00:35:56.992658 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:35:56.992774 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:35:56.992891 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:35:56.993007 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 8 00:35:56.993863 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 8 00:35:56.993990 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Nov 8 00:35:56.994000 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:35:56.994008 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 00:35:56.994019 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Nov 8 00:35:56.994026 kernel: Initialise system trusted keyrings Nov 8 00:35:56.994034 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Nov 8 00:35:56.994041 kernel: Key type asymmetric registered Nov 8 00:35:56.994048 kernel: Asymmetric key parser 'x509' registered Nov 8 00:35:56.994055 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:35:56.994062 kernel: io scheduler mq-deadline registered Nov 8 00:35:56.994068 kernel: io scheduler kyber registered Nov 8 00:35:56.994075 kernel: io scheduler bfq registered Nov 8 00:35:56.994081 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:35:56.994091 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 8 00:35:56.994098 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 8 00:35:56.994270 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:35:56.994279 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:35:56.994286 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:35:56.994292 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:35:56.994299 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:35:56.994437 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 8 00:35:56.994452 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:35:56.994571 kernel: rtc_cmos 00:03: registered as rtc0 Nov 8 00:35:56.994688 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:35:56 UTC (1762562156) Nov 8 00:35:56.994807 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 8 00:35:56.994821 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 8 00:35:56.994828 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:35:56.994834 kernel: Segment Routing with IPv6 Nov 8 00:35:56.994841 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:35:56.994852 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:35:56.994859 kernel: Key type dns_resolver registered Nov 8 00:35:56.994866 kernel: IPI shorthand broadcast: enabled Nov 8 00:35:56.994873 kernel: sched_clock: Marking stable (851004688, 309200964)->(1285218847, -125013195) Nov 8 00:35:56.994879 kernel: registered taskstats version 1 Nov 8 00:35:56.994886 kernel: Loading compiled-in X.509 certificates Nov 8 00:35:56.994893 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:35:56.994900 kernel: Key type .fscrypt registered Nov 8 00:35:56.994906 kernel: Key type fscrypt-provisioning registered Nov 8 00:35:56.994915 kernel: ima: No TPM chip found, activating TPM-bypass! 
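
The rtc_cmos line is internally consistent: Unix epoch 1762562156 converts to exactly 2025-11-08T00:35:56 UTC, matching both the printed wall-clock time and the journal's own timestamps. A quick check in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Unix(1762562156, 0).UTC()
        fmt.Println(t.Format(time.RFC3339)) // 2025-11-08T00:35:56Z
    }
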
Nov 8 00:35:56.994922 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:35:56.994929 kernel: ima: No architecture policies found Nov 8 00:35:56.994936 kernel: clk: Disabling unused clocks Nov 8 00:35:56.994943 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:35:56.994950 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:35:56.994956 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:35:56.994963 kernel: Run /init as init process Nov 8 00:35:56.994970 kernel: with arguments: Nov 8 00:35:56.994979 kernel: /init Nov 8 00:35:56.994986 kernel: with environment: Nov 8 00:35:56.994992 kernel: HOME=/ Nov 8 00:35:56.994999 kernel: TERM=linux Nov 8 00:35:56.995008 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:35:56.995017 systemd[1]: Detected virtualization kvm. Nov 8 00:35:56.995025 systemd[1]: Detected architecture x86-64. Nov 8 00:35:56.995032 systemd[1]: Running in initrd. Nov 8 00:35:56.995041 systemd[1]: No hostname configured, using default hostname. Nov 8 00:35:56.995048 systemd[1]: Hostname set to . Nov 8 00:35:56.995055 systemd[1]: Initializing machine ID from random generator. Nov 8 00:35:56.995062 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:35:56.995069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:35:56.995094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:35:56.997132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:35:56.997146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:35:56.997154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:35:56.997162 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:35:56.997171 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:35:56.997179 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:35:56.997191 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:35:56.997199 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:35:56.997206 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:35:56.997214 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:35:56.997221 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:35:56.997228 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:35:56.997235 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:35:56.997243 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:35:56.997250 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:35:56.997260 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
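
systemd's "Detected virtualization kvm" comes from a stack of probes, including the CPUID hypervisor leaf and the DMI strings the kernel printed earlier ("Linode Compute Instance", Q35 + ICH9). A reduced Go sketch of the DMI portion of such a check; the files are real sysfs paths, but the classification logic here is deliberately simplistic:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func dmi(field string) string {
        b, err := os.ReadFile("/sys/class/dmi/id/" + field)
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(b))
    }

    func main() {
        vendor, product := dmi("sys_vendor"), dmi("product_name")
        fmt.Printf("vendor=%q product=%q\n", vendor, product)
        // Deliberately crude; real detectors also check CPUID leaf 0x40000000.
        if strings.Contains(vendor, "QEMU") || strings.Contains(product, "KVM") {
            fmt.Println("virtualization: kvm")
        } else {
            fmt.Println("virtualization: unknown from DMI alone")
        }
    }
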
Nov 8 00:35:56.997267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:35:56.997275 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:35:56.997282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:35:56.997289 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:35:56.997297 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:35:56.997304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:35:56.997311 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:35:56.997318 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:35:56.997328 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:35:56.997336 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:35:56.997364 systemd-journald[177]: Collecting audit messages is disabled. Nov 8 00:35:56.997382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:35:56.997393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:35:56.997404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:35:56.997411 systemd-journald[177]: Journal started Nov 8 00:35:56.997429 systemd-journald[177]: Runtime Journal (/run/log/journal/e577be7faabd49769a753bf494aafd34) is 8.0M, max 78.3M, 70.3M free. Nov 8 00:35:57.004687 systemd-modules-load[178]: Inserted module 'overlay' Nov 8 00:35:57.092859 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:35:57.092888 kernel: Bridge firewalling registered Nov 8 00:35:57.029821 systemd-modules-load[178]: Inserted module 'br_netfilter' Nov 8 00:35:57.096823 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:35:57.097906 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:35:57.098858 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:35:57.100324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:35:57.109344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:35:57.111977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:35:57.115259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:35:57.124056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:35:57.137313 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:35:57.159579 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:35:57.162259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:35:57.164068 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:35:57.172238 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:35:57.178283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:35:57.182271 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
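
The bridge warning followed by "Inserted module 'br_netfilter'" is the standard sequence on container hosts: bridged frames only traverse iptables once br_netfilter is loaded and the bridge-nf-call sysctls are enabled. On this boot the module came in via systemd-modules-load; done by hand, "update your scripts" amounts to something like the Go sketch below (shelling out to modprobe, then writing the real /proc/sys knob):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Equivalent of `modprobe br_netfilter`.
        if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
            panic(string(out))
        }
        // Send bridged IPv4 traffic through iptables (write "0" to disable).
        err := os.WriteFile("/proc/sys/net/bridge/bridge-nf-call-iptables", []byte("1\n"), 0o644)
        if err != nil {
            panic(err)
        }
    }
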
Nov 8 00:35:57.197133 dracut-cmdline[208]: dracut-dracut-053 Nov 8 00:35:57.198604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:35:57.203346 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:35:57.213454 systemd-resolved[211]: Positive Trust Anchors: Nov 8 00:35:57.213467 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:35:57.213495 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:35:57.217483 systemd-resolved[211]: Defaulting to hostname 'linux'. Nov 8 00:35:57.219037 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:35:57.223157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:35:57.295155 kernel: SCSI subsystem initialized Nov 8 00:35:57.305135 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:35:57.317150 kernel: iscsi: registered transport (tcp) Nov 8 00:35:57.338603 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:35:57.338657 kernel: QLogic iSCSI HBA Driver Nov 8 00:35:57.394650 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:35:57.403266 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:35:57.432311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:35:57.432349 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:35:57.432370 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:35:57.478140 kernel: raid6: avx2x4 gen() 30956 MB/s Nov 8 00:35:57.499407 kernel: raid6: avx2x2 gen() 27104 MB/s Nov 8 00:35:57.517252 kernel: raid6: avx2x1 gen() 24183 MB/s Nov 8 00:35:57.517276 kernel: raid6: using algorithm avx2x4 gen() 30956 MB/s Nov 8 00:35:57.537445 kernel: raid6: .... xor() 5048 MB/s, rmw enabled Nov 8 00:35:57.537463 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:35:57.561146 kernel: xor: automatically using best checksumming function avx Nov 8 00:35:57.696151 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:35:57.711834 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:35:57.718258 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:35:57.743403 systemd-udevd[397]: Using default interface naming scheme 'v255'. Nov 8 00:35:57.748010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
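
The raid6 lines record the kernel benchmarking every parity generator usable on this CPU and keeping the fastest, avx2x4 at 30956 MB/s. The selection itself is a throughput argmax, sketched below with the numbers from the log standing in for live measurements (the real kernel times actual gen() runs over a fixed buffer):

    package main

    import "fmt"

    func main() {
        // Throughputs in MB/s, as reported by the benchmark pass above.
        results := map[string]int{
            "avx2x4": 30956,
            "avx2x2": 27104,
            "avx2x1": 24183,
        }
        best, bestMBs := "", 0
        for name, mbs := range results {
            if mbs > bestMBs {
                best, bestMBs = name, mbs
            }
        }
        fmt.Printf("raid6: using algorithm %s gen() %d MB/s\n", best, bestMBs)
    }
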
Nov 8 00:35:57.758224 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:35:57.774706 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Nov 8 00:35:57.812818 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:35:57.820255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:35:57.889349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:35:57.895821 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:35:57.919347 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:35:57.921493 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:35:57.922516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:35:57.923663 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:35:57.931233 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:35:57.947460 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:35:57.977130 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:35:57.980134 kernel: scsi host0: Virtio SCSI HBA Nov 8 00:35:57.986164 kernel: libata version 3.00 loaded. Nov 8 00:35:57.993148 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 8 00:35:57.998135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:35:57.998253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:35:58.001320 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:35:58.003213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:35:58.003330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:35:58.005197 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:35:58.012348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:35:58.024127 kernel: AVX2 version of gcm_enc/dec engaged. 
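[Annotation] The "Coldplug All udev Devices" step above replays kernel "add" uevents for devices that were registered before udevd started, so that udev rules run for them too. Roughly what `udevadm trigger --action=add` does, sketched against sysfs:

```python
# Rough sketch of udev coldplug: write "add" to every uevent file under
# /sys/devices so the kernel re-emits the uevent to the now-running udevd.
import os

def coldplug(root: str = "/sys/devices") -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        if "uevent" in filenames:
            try:
                with open(os.path.join(dirpath, "uevent"), "w") as f:
                    f.write("add\n")
            except OSError:
                pass  # some nodes are not writable; the real tool skips them too

if __name__ == "__main__":
    coldplug()
```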
Nov 8 00:35:58.027148 kernel: AES CTR mode by8 optimization enabled Nov 8 00:35:58.027172 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 00:35:58.166198 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 00:35:58.167181 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 00:35:58.167497 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 00:35:58.199136 kernel: scsi host1: ahci Nov 8 00:35:58.200137 kernel: scsi host2: ahci Nov 8 00:35:58.200317 kernel: scsi host3: ahci Nov 8 00:35:58.203693 kernel: scsi host4: ahci Nov 8 00:35:58.203983 kernel: scsi host5: ahci Nov 8 00:35:58.204178 kernel: scsi host6: ahci Nov 8 00:35:58.204338 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 33 Nov 8 00:35:58.204358 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 33 Nov 8 00:35:58.204368 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 33 Nov 8 00:35:58.204378 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 33 Nov 8 00:35:58.204387 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 33 Nov 8 00:35:58.204397 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 33 Nov 8 00:35:58.314034 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:35:58.323289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:35:58.344135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:35:58.520379 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 8 00:35:58.520419 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:35:58.520431 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 8 00:35:58.521128 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:35:58.523163 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:35:58.529183 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 00:35:58.548466 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 8 00:35:58.548770 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 8 00:35:58.573376 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:35:58.575610 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 8 00:35:58.575798 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 00:35:58.584811 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:35:58.584834 kernel: GPT:9289727 != 167739391 Nov 8 00:35:58.584855 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:35:58.587449 kernel: GPT:9289727 != 167739391 Nov 8 00:35:58.590441 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:35:58.590458 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:35:58.594377 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:35:58.635154 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (455) Nov 8 00:35:58.645134 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (447) Nov 8 00:35:58.643585 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 8 00:35:58.650243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
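[Annotation] The GPT warnings above ("GPT:9289727 != 167739391") mean the backup header was written for a 9,289,728-sector image, but the image now sits on a 167,739,392-sector disk, so the backup is no longer at the true last LBA. This is expected when a fixed-size image is written to a larger disk, and disk-uuid repairs it on the next lines. A read-only sketch of the kernel's check, assuming 512-byte logical sectors and /dev/sda as an example device:

```python
# Reproduce the kernel's GPT sanity check from the log: the primary header's
# AlternateLBA field should point at the last LBA of the disk. Read-only.
import os
import struct

def check_backup_lba(dev: str = "/dev/sda", sector: int = 512) -> None:
    with open(dev, "rb") as f:
        f.seek(sector)                       # primary GPT header lives at LBA 1
        header = f.read(92)
        assert header[:8] == b"EFI PART", "no GPT signature"
        # Offset 32 in the header is AlternateLBA (little-endian u64).
        (alt_lba,) = struct.unpack_from("<Q", header, 32)
        last_lba = os.lseek(f.fileno(), 0, os.SEEK_END) // sector - 1
    if alt_lba != last_lba:
        print(f"GPT:{alt_lba} != {last_lba}  (backup header not at end of disk)")

if __name__ == "__main__":
    check_backup_lba()
```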
Nov 8 00:35:58.660178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 8 00:35:58.662163 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 8 00:35:58.668232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:35:58.680260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:35:58.685310 disk-uuid[569]: Primary Header is updated. Nov 8 00:35:58.685310 disk-uuid[569]: Secondary Entries is updated. Nov 8 00:35:58.685310 disk-uuid[569]: Secondary Header is updated. Nov 8 00:35:58.692268 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:35:58.699135 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:35:59.703186 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:35:59.704183 disk-uuid[570]: The operation has completed successfully. Nov 8 00:35:59.760347 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:35:59.760497 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:35:59.771270 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:35:59.776888 sh[584]: Success Nov 8 00:35:59.793152 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 8 00:35:59.844165 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:35:59.852217 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:35:59.854361 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:35:59.880153 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:35:59.880186 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:35:59.884443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:35:59.889945 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:35:59.889959 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:35:59.901126 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:35:59.902413 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:35:59.903838 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:35:59.915230 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:35:59.918383 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:35:59.932263 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:35:59.932293 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:35:59.935367 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:35:59.943542 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:35:59.943573 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:35:59.962280 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:35:59.961882 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:35:59.969760 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:35:59.979292 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 8 00:36:00.065475 ignition[679]: Ignition 2.19.0 Nov 8 00:36:00.065488 ignition[679]: Stage: fetch-offline Nov 8 00:36:00.065530 ignition[679]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:00.065585 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:00.072866 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:36:00.066287 ignition[679]: parsed url from cmdline: "" Nov 8 00:36:00.066292 ignition[679]: no config URL provided Nov 8 00:36:00.066299 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:36:00.066312 ignition[679]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:36:00.066318 ignition[679]: failed to fetch config: resource requires networking Nov 8 00:36:00.066527 ignition[679]: Ignition finished successfully Nov 8 00:36:00.082143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:36:00.094400 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:36:00.116360 systemd-networkd[770]: lo: Link UP Nov 8 00:36:00.116373 systemd-networkd[770]: lo: Gained carrier Nov 8 00:36:00.118076 systemd-networkd[770]: Enumeration completed Nov 8 00:36:00.118568 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:36:00.118572 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:36:00.119796 systemd-networkd[770]: eth0: Link UP Nov 8 00:36:00.119801 systemd-networkd[770]: eth0: Gained carrier Nov 8 00:36:00.119809 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:36:00.120049 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:36:00.122049 systemd[1]: Reached target network.target - Network. Nov 8 00:36:00.130270 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:36:00.145227 ignition[772]: Ignition 2.19.0 Nov 8 00:36:00.145243 ignition[772]: Stage: fetch Nov 8 00:36:00.145395 ignition[772]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:00.145408 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:00.145498 ignition[772]: parsed url from cmdline: "" Nov 8 00:36:00.145503 ignition[772]: no config URL provided Nov 8 00:36:00.145509 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:36:00.145519 ignition[772]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:36:00.145540 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 8 00:36:00.145699 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:36:00.345912 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2 Nov 8 00:36:00.346190 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:36:00.746461 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3 Nov 8 00:36:00.746661 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:36:00.788230 systemd-networkd[770]: eth0: DHCPv4 address 172.239.57.24/24, gateway 172.239.57.1 acquired from 23.215.118.212 Nov 8 00:36:01.195336 systemd-networkd[770]: eth0: Gained IPv6LL Nov 8 00:36:01.546816 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4 Nov 8 00:36:01.644209 ignition[772]: PUT result: OK Nov 8 00:36:01.644273 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 8 00:36:01.757241 ignition[772]: GET result: OK Nov 8 00:36:01.757351 ignition[772]: parsing config with SHA512: 784754bb92aec266de9f26135f6f860272d05f555ebaab5fb70f8298cefb171e03ddfe6b98c4bba771cdd5049bee4e8d6bf722a2134703b85e9dadc4884db090 Nov 8 00:36:01.762434 unknown[772]: fetched base config from "system" Nov 8 00:36:01.762450 unknown[772]: fetched base config from "system" Nov 8 00:36:01.762768 ignition[772]: fetch: fetch complete Nov 8 00:36:01.762468 unknown[772]: fetched user config from "akamai" Nov 8 00:36:01.762775 ignition[772]: fetch: fetch passed Nov 8 00:36:01.765785 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:36:01.762829 ignition[772]: Ignition finished successfully Nov 8 00:36:01.777324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:36:01.795600 ignition[780]: Ignition 2.19.0 Nov 8 00:36:01.795617 ignition[780]: Stage: kargs Nov 8 00:36:01.795793 ignition[780]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:01.795806 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:01.799898 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:36:01.796554 ignition[780]: kargs: kargs passed Nov 8 00:36:01.796604 ignition[780]: Ignition finished successfully Nov 8 00:36:01.807267 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:36:01.834072 ignition[786]: Ignition 2.19.0 Nov 8 00:36:01.834083 ignition[786]: Stage: disks Nov 8 00:36:01.834279 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:01.840810 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:36:01.834292 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:01.860189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Nov 8 00:36:01.838483 ignition[786]: disks: disks passed Nov 8 00:36:01.861719 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:36:01.838581 ignition[786]: Ignition finished successfully Nov 8 00:36:01.863779 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:36:01.865501 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:36:01.867155 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:36:01.883317 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:36:01.903249 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:36:01.908024 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:36:01.918260 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:36:02.007608 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:36:02.008033 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:36:02.009437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:36:02.025238 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:36:02.028360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:36:02.030310 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:36:02.031743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:36:02.031823 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:36:02.042452 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (802) Nov 8 00:36:02.047042 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:36:02.047088 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:36:02.047011 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:36:02.051756 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:36:02.058129 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:36:02.058166 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:36:02.060309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:36:02.063033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:36:02.109535 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:36:02.115198 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:36:02.120099 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:36:02.124403 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:36:02.217555 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:36:02.226196 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:36:02.230243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:36:02.243783 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
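[Annotation] The fsck summary above ("ROOT: clean, 14/553520 files, 52654/553472 blocks") encodes inode and block usage of the freshly checked root filesystem. A tiny parser turning that line into percentages:

```python
# Parse the e2fsck one-line summary logged above into usage figures.
import re

LINE = "ROOT: clean, 14/553520 files, 52654/553472 blocks"

m = re.match(r"(\w+): clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", LINE)
label, inodes_used, inodes_total, blocks_used, blocks_total = m.groups()
print(f"{label}: {int(inodes_used)/int(inodes_total):.4%} of inodes, "
      f"{int(blocks_used)/int(blocks_total):.4%} of blocks in use")
# -> ROOT: 0.0025% of inodes, 9.5134% of blocks in use
```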
Nov 8 00:36:02.249907 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:36:02.268490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:36:02.274929 ignition[916]: INFO : Ignition 2.19.0 Nov 8 00:36:02.276032 ignition[916]: INFO : Stage: mount Nov 8 00:36:02.277346 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:02.277346 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:02.280175 ignition[916]: INFO : mount: mount passed Nov 8 00:36:02.281042 ignition[916]: INFO : Ignition finished successfully Nov 8 00:36:02.282917 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:36:02.296220 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:36:03.014282 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:36:03.029524 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (928) Nov 8 00:36:03.029569 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:36:03.033303 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:36:03.036220 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:36:03.042593 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:36:03.042686 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:36:03.047282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:36:03.074016 ignition[944]: INFO : Ignition 2.19.0 Nov 8 00:36:03.075053 ignition[944]: INFO : Stage: files Nov 8 00:36:03.076780 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:03.076780 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:03.076780 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:36:03.079635 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:36:03.079635 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:36:03.083594 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:36:03.084818 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:36:03.086226 unknown[944]: wrote ssh authorized keys file for user: core Nov 8 00:36:03.087341 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:36:03.088483 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:36:03.089722 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:36:03.375079 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:36:03.595841 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:36:03.595841 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:36:03.598549 ignition[944]: 
INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:03.598549 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:36:04.420510 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:36:04.858834 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:36:04.858834 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(f): [started] setting preset 
to enabled for "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:36:04.863720 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:36:04.863720 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:36:04.863720 ignition[944]: INFO : files: files passed Nov 8 00:36:04.863720 ignition[944]: INFO : Ignition finished successfully Nov 8 00:36:04.862855 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:36:04.881323 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:36:04.894320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:36:04.895458 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:36:04.895580 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:36:04.915198 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:36:04.915198 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:36:04.918218 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:36:04.920293 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:36:04.922746 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:36:04.928284 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:36:04.955627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:36:04.955789 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:36:04.958063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:36:04.960059 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:36:04.960939 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:36:04.967314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:36:04.981874 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:36:04.989265 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:36:04.999834 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:36:05.001290 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:36:05.002999 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:36:05.004653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:36:05.004763 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:36:05.006630 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:36:05.007688 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:36:05.009329 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:36:05.010820 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
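[Annotation] The files stage above (ops 3 through 10: an SSH key for "core", the helm tarball, the kubernetes sysext link, and the prepare-helm.service preset) is driven by the Ignition config fetched from user-data. A hypothetical reconstruction of the kind of config that would produce those operations, written as a Python dict; field names follow the Ignition v3 spec, and the contents are inferred from the log, not the instance's real user-data:

```python
# Hypothetical reconstruction (from the ops logged above) of an Ignition v3
# config driving the files stage. URLs and paths are those visible in the log;
# the elided SSH key and the exact spec version are placeholders.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [{"name": "core", "sshAuthorizedKeys": ["<elided>"]}]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source":
                 "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf", "overwrite": True},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
        ],
    },
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}
```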
Nov 8 00:36:05.012324 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:36:05.015950 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:36:05.016773 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:36:05.017656 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:36:05.019320 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:36:05.020995 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:36:05.022541 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:36:05.022677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:36:05.024528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:36:05.025622 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:36:05.027142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:36:05.027255 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:36:05.028846 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:36:05.028949 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:36:05.031173 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:36:05.031288 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:36:05.032345 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:36:05.032447 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:36:05.040646 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:36:05.043521 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:36:05.044694 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:36:05.044854 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:36:05.051329 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:36:05.051441 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:36:05.058773 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:36:05.058942 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:36:05.064184 ignition[998]: INFO : Ignition 2.19.0 Nov 8 00:36:05.064184 ignition[998]: INFO : Stage: umount Nov 8 00:36:05.064184 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:36:05.064184 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:36:05.064184 ignition[998]: INFO : umount: umount passed Nov 8 00:36:05.064184 ignition[998]: INFO : Ignition finished successfully Nov 8 00:36:05.065528 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:36:05.065661 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:36:05.071230 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:36:05.071290 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:36:05.072047 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:36:05.072100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:36:05.074272 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:36:05.074324 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 8 00:36:05.075491 systemd[1]: Stopped target network.target - Network. Nov 8 00:36:05.078239 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:36:05.078295 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:36:05.079605 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:36:05.080293 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:36:05.106846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:36:05.108643 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:36:05.110049 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:36:05.111703 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:36:05.111767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:36:05.113459 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:36:05.113507 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:36:05.114891 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:36:05.114957 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:36:05.116372 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:36:05.116425 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:36:05.117990 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:36:05.119652 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:36:05.122422 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:36:05.123033 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:36:05.123174 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:36:05.123229 systemd-networkd[770]: eth0: DHCPv6 lease lost Nov 8 00:36:05.127677 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:36:05.127771 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:36:05.130089 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:36:05.130364 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:36:05.133569 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:36:05.133761 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:36:05.135438 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:36:05.135504 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:36:05.153532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:36:05.154282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:36:05.154343 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:36:05.155177 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:36:05.155250 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:36:05.156798 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:36:05.156849 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:36:05.158477 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:36:05.158528 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 8 00:36:05.160463 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:36:05.176301 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:36:05.176436 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:36:05.180854 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:36:05.181054 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:36:05.182836 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:36:05.182892 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:36:05.184388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:36:05.184429 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:36:05.186010 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:36:05.186064 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:36:05.188297 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:36:05.188348 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:36:05.189969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:36:05.190020 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:36:05.197251 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:36:05.198371 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:36:05.198429 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:36:05.199246 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:36:05.199300 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:36:05.200069 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:36:05.200153 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:36:05.201875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:36:05.201926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:36:05.210503 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:36:05.210621 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:36:05.212084 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:36:05.222371 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:36:05.229539 systemd[1]: Switching root. Nov 8 00:36:05.256379 systemd-journald[177]: Journal stopped Nov 8 00:36:06.415432 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Nov 8 00:36:06.415470 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:36:06.415483 kernel: SELinux: policy capability open_perms=1 Nov 8 00:36:06.415492 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:36:06.415505 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:36:06.415515 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:36:06.415525 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:36:06.415535 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:36:06.415544 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:36:06.415554 kernel: audit: type=1403 audit(1762562165.412:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:36:06.415564 systemd[1]: Successfully loaded SELinux policy in 51.698ms. Nov 8 00:36:06.415578 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.794ms. Nov 8 00:36:06.415591 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:36:06.415603 systemd[1]: Detected virtualization kvm. Nov 8 00:36:06.415613 systemd[1]: Detected architecture x86-64. Nov 8 00:36:06.415623 systemd[1]: Detected first boot. Nov 8 00:36:06.415636 systemd[1]: Initializing machine ID from random generator. Nov 8 00:36:06.415646 zram_generator::config[1042]: No configuration found. Nov 8 00:36:06.415658 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:36:06.415668 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:36:06.415677 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:36:06.415688 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:36:06.415699 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:36:06.415712 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:36:06.415722 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:36:06.415732 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:36:06.415743 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:36:06.415753 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:36:06.415763 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:36:06.415773 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:36:06.415786 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:36:06.415796 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:36:06.415806 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:36:06.415817 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:36:06.415827 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:36:06.415837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
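[Annotation] "Detected first boot" together with "Initializing machine ID from random generator" above means /etc/machine-id was still empty, so systemd generated a random 128-bit ID rendered as 32 lowercase hex digits (the same format as the journal directory names seen in this log). A sketch of that initialization; systemd's own derivation additionally normalizes the UUID version bits:

```python
# Sketch of first-boot machine ID initialization: 128 random bits rendered
# as 32 lowercase hex characters, as stored in /etc/machine-id.
import uuid

def new_machine_id() -> str:
    return uuid.uuid4().hex  # random v4 UUID, hex-encoded without dashes

print(new_machine_id())      # e.g. '79d266eb24554dd2850d33f98d6ff048'
```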
Nov 8 00:36:06.415848 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:36:06.415857 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:36:06.415870 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:36:06.415880 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:36:06.415894 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:36:06.415905 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:36:06.415916 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:36:06.415926 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:36:06.415936 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:36:06.415946 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:36:06.415960 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:36:06.415970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:36:06.415980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:36:06.415990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:36:06.416001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:36:06.416014 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:36:06.416025 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:36:06.416035 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:36:06.416046 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:36:06.416057 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:06.416068 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:36:06.416078 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:36:06.416089 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:36:06.416102 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:36:06.416140 systemd[1]: Reached target machines.target - Containers. Nov 8 00:36:06.416151 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:36:06.416162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:36:06.416173 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:36:06.416184 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:36:06.416194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:36:06.416205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:36:06.416219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:36:06.416229 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:36:06.416240 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 8 00:36:06.416251 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:36:06.416261 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:36:06.416271 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:36:06.416281 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:36:06.416291 kernel: fuse: init (API version 7.39) Nov 8 00:36:06.416306 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:36:06.416316 kernel: ACPI: bus type drm_connector registered Nov 8 00:36:06.416326 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:36:06.416336 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:36:06.416346 kernel: loop: module loaded Nov 8 00:36:06.416356 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:36:06.416392 systemd-journald[1124]: Collecting audit messages is disabled. Nov 8 00:36:06.416419 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:36:06.416430 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:36:06.416440 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:36:06.416452 systemd-journald[1124]: Journal started Nov 8 00:36:06.416476 systemd-journald[1124]: Runtime Journal (/run/log/journal/79d266eb24554dd2850d33f98d6ff048) is 8.0M, max 78.3M, 70.3M free. Nov 8 00:36:06.027556 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:36:06.043343 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:36:06.043995 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:36:06.421997 systemd[1]: Stopped verity-setup.service. Nov 8 00:36:06.422031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:06.434140 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:36:06.435524 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:36:06.436470 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:36:06.437397 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:36:06.438291 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:36:06.439243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:36:06.440181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:36:06.441267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:36:06.442374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:36:06.443742 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:36:06.443982 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:36:06.445205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:36:06.445447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:36:06.446589 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:36:06.446821 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
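[Annotation] The burst of modprobe@*.service units above (configfs, dm_mod, drm, efi_pstore, fuse, loop) are all instances of a single template unit; the text after '@' is the instance name, substituted into the template's ExecStart via a specifier, so one unit file serves every module. A sketch of the name split:

```python
# Sketch of systemd template instantiation for the modprobe@.service burst
# above: "modprobe@dm_mod.service" is template "modprobe@.service" with
# instance "dm_mod" substituted for the instance specifier.
def split_instance(unit: str) -> tuple[str, str]:
    name, _, suffix = unit.rpartition(".")
    template, _, instance = name.partition("@")
    return f"{template}@.{suffix}", instance

for unit in ["modprobe@configfs.service", "modprobe@dm_mod.service",
             "modprobe@drm.service", "modprobe@loop.service"]:
    template, instance = split_instance(unit)
    print(f"{template} instance {instance}: roughly runs 'modprobe {instance}'")
```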
Nov 8 00:36:06.447921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:36:06.448300 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:36:06.449578 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:36:06.449807 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:36:06.450990 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:36:06.451446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:36:06.452681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:36:06.453811 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:36:06.455124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:36:06.493542 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:36:06.501711 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:36:06.508182 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:36:06.509515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:36:06.509595 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:36:06.511276 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:36:06.517973 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:36:06.523235 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:36:06.524338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:36:06.537255 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:36:06.543608 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:36:06.544886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:36:06.547426 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:36:06.549200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:36:06.558240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:36:06.564231 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:36:06.568284 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:36:06.571919 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:36:06.574286 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:36:06.577164 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:36:06.588223 systemd-journald[1124]: Time spent on flushing to /var/log/journal/79d266eb24554dd2850d33f98d6ff048 is 100.924ms for 977 entries. Nov 8 00:36:06.588223 systemd-journald[1124]: System Journal (/var/log/journal/79d266eb24554dd2850d33f98d6ff048) is 8.0M, max 195.6M, 187.6M free. 
Nov 8 00:36:06.719931 systemd-journald[1124]: Received client request to flush runtime journal. Nov 8 00:36:06.719993 kernel: loop0: detected capacity change from 0 to 229808 Nov 8 00:36:06.720025 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:36:06.596665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:36:06.726909 kernel: loop1: detected capacity change from 0 to 140768 Nov 8 00:36:06.606237 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:36:06.611068 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:36:06.613515 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:36:06.623409 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:36:06.666438 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:36:06.691237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:36:06.702476 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:36:06.708635 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:36:06.728612 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Nov 8 00:36:06.728625 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Nov 8 00:36:06.728810 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:36:06.753733 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:36:06.767279 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:36:06.788369 kernel: loop2: detected capacity change from 0 to 8 Nov 8 00:36:06.813234 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:36:06.817159 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:36:06.824226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:36:06.867139 kernel: loop4: detected capacity change from 0 to 229808 Nov 8 00:36:06.869785 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 8 00:36:06.869804 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 8 00:36:06.878913 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:36:06.895135 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:36:06.916513 kernel: loop6: detected capacity change from 0 to 8 Nov 8 00:36:06.922153 kernel: loop7: detected capacity change from 0 to 142488 Nov 8 00:36:06.936818 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Nov 8 00:36:06.937798 (sd-merge)[1188]: Merged extensions into '/usr'. Nov 8 00:36:06.943577 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:36:06.943590 systemd[1]: Reloading... Nov 8 00:36:07.050141 zram_generator::config[1215]: No configuration found. Nov 8 00:36:07.134877 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
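[Annotation] The sd-merge lines above show systemd-sysext overlaying four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-akamai) onto /usr; the loop device capacity changes are those images being attached. To be merged, each extension must carry an extension-release file whose ID matches the host's os-release (or the wildcard "_any"). A sketch of that compatibility check:

```python
# Sketch of the systemd-sysext compatibility check behind the "Merged
# extensions into '/usr'" line: each extension must ship
# usr/lib/extension-release.d/extension-release.<NAME> with a matching ID.
import os

def os_release_field(path: str, key: str) -> str | None:
    try:
        with open(path) as f:
            for line in f:
                k, _, v = line.strip().partition("=")
                if k == key:
                    return v.strip('"')
    except FileNotFoundError:
        return None
    return None

def extension_compatible(ext_root: str, name: str) -> bool:
    rel = os.path.join(ext_root, "usr/lib/extension-release.d",
                       f"extension-release.{name}")
    ext_id = os_release_field(rel, "ID")
    host_id = os_release_field("/etc/os-release", "ID")
    return ext_id is not None and (ext_id == "_any" or ext_id == host_id)

print(extension_compatible("/run/extensions/kubernetes", "kubernetes"))
```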
Nov 8 00:36:07.188106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:36:07.232381 systemd[1]: Reloading finished in 288 ms. Nov 8 00:36:07.257941 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:36:07.262971 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:36:07.270879 systemd[1]: Starting ensure-sysext.service... Nov 8 00:36:07.274456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:36:07.292941 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:36:07.292956 systemd[1]: Reloading... Nov 8 00:36:07.309813 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:36:07.310402 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:36:07.312064 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:36:07.312455 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Nov 8 00:36:07.312588 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Nov 8 00:36:07.316521 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:36:07.316596 systemd-tmpfiles[1259]: Skipping /boot Nov 8 00:36:07.328635 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:36:07.328706 systemd-tmpfiles[1259]: Skipping /boot Nov 8 00:36:07.395224 zram_generator::config[1295]: No configuration found. Nov 8 00:36:07.496187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:36:07.539203 systemd[1]: Reloading finished in 245 ms. Nov 8 00:36:07.559337 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:36:07.567620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:36:07.579373 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:36:07.585269 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:36:07.587275 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:36:07.596451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:36:07.601336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:36:07.606318 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:36:07.609719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:07.609891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:36:07.618605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:36:07.629321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
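[Annotation] The "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d fragments declare the same path, and systemd-tmpfiles keeps the entry it saw first. A sketch of that duplicate scan, limited to /usr/lib/tmpfiles.d for brevity (the real tool also merges /etc and /run with override precedence):

```python
# Sketch of the duplicate-path detection behind the systemd-tmpfiles
# warnings above: later fragments declaring an already-seen path are ignored.
import glob

def scan_tmpfiles() -> None:
    seen: dict[str, str] = {}
    for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
        with open(conf) as f:
            for lineno, line in enumerate(f, 1):
                fields = line.split()
                if len(fields) < 2 or line.lstrip().startswith("#"):
                    continue
                path = fields[1]
                if path in seen:
                    print(f"{conf}:{lineno}: Duplicate line for path "
                          f'"{path}", ignoring.')
                else:
                    seen[path] = conf

if __name__ == "__main__":
    scan_tmpfiles()
```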
Nov 8 00:36:07.632299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:36:07.635277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:36:07.635377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:07.636496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:36:07.637064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:36:07.650258 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:36:07.655447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:36:07.655693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:36:07.660618 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:36:07.666260 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:07.667220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:36:07.676485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:36:07.682036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:36:07.683312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:36:07.686562 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:36:07.688167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:07.689418 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:36:07.689613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:36:07.698017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:07.698736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:36:07.703408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:36:07.711332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:36:07.713275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:36:07.713413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:36:07.714348 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:36:07.714654 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Nov 8 00:36:07.716643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:36:07.716818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:36:07.732061 systemd[1]: Finished ensure-sysext.service. Nov 8 00:36:07.742314 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Nov 8 00:36:07.749376 augenrules[1368]: No rules Nov 8 00:36:07.753442 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:36:07.757930 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:36:07.762613 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:36:07.774496 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:36:07.774707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:36:07.776579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:36:07.776761 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:36:07.779019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:36:07.780561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:36:07.780925 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:36:07.781127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:36:07.782682 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:36:07.790323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:36:07.797790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:36:07.809309 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:36:07.921769 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:36:07.928763 systemd-networkd[1389]: lo: Link UP Nov 8 00:36:07.928776 systemd-networkd[1389]: lo: Gained carrier Nov 8 00:36:07.929601 systemd-networkd[1389]: Enumeration completed Nov 8 00:36:07.929710 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:36:07.936483 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:36:07.958773 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:36:07.961257 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:36:07.964837 systemd-resolved[1337]: Positive Trust Anchors: Nov 8 00:36:07.966432 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:36:07.966465 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:36:07.974096 systemd-resolved[1337]: Defaulting to hostname 'linux'. Nov 8 00:36:07.977895 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:36:07.978861 systemd[1]: Reached target network.target - Network. 
Nov 8 00:36:07.980201 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:36:08.002045 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:36:08.002061 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:36:08.003000 systemd-networkd[1389]: eth0: Link UP Nov 8 00:36:08.003009 systemd-networkd[1389]: eth0: Gained carrier Nov 8 00:36:08.003021 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:36:08.017149 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1390) Nov 8 00:36:08.045151 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:36:08.062174 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:36:08.088143 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:36:08.105138 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:36:08.110413 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:36:08.110675 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:36:08.116175 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:36:08.148212 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:36:08.151803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:36:08.163423 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:36:08.170745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:36:08.183746 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:36:08.185323 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:36:08.194045 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:36:08.206982 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:36:08.234742 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:36:08.236726 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:36:08.243331 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:36:08.328514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:36:08.338475 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:36:08.339614 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:36:08.340922 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:36:08.342226 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:36:08.343169 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:36:08.344183 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:36:08.344547 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Nov 8 00:36:08.345533 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:36:08.345632 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:36:08.346524 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:36:08.348775 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:36:08.352320 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:36:08.359177 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:36:08.360540 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:36:08.361384 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:36:08.362099 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:36:08.362852 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:36:08.362888 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:36:08.364146 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:36:08.368286 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:36:08.374261 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:36:08.384200 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:36:08.393419 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:36:08.396833 jq[1440]: false Nov 8 00:36:08.394186 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:36:08.397684 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:36:08.401281 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:36:08.407275 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:36:08.410246 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:36:08.420903 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:36:08.423623 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:36:08.424090 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:36:08.431302 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:36:08.437047 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:36:08.442186 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:36:08.451595 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:36:08.451922 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 8 00:36:08.462834 extend-filesystems[1442]: Found loop4 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found loop5 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found loop6 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found loop7 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda1 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda2 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda3 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found usr Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda4 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda6 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda7 Nov 8 00:36:08.465233 extend-filesystems[1442]: Found sda9 Nov 8 00:36:08.465233 extend-filesystems[1442]: Checking size of /dev/sda9 Nov 8 00:36:08.519194 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Nov 8 00:36:08.519225 coreos-metadata[1438]: Nov 08 00:36:08.518 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 8 00:36:08.482200 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:36:08.523440 jq[1452]: true Nov 8 00:36:08.524146 extend-filesystems[1442]: Resized partition /dev/sda9 Nov 8 00:36:08.519601 dbus-daemon[1439]: [system] SELinux support is enabled Nov 8 00:36:08.482514 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:36:08.527441 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:36:08.488024 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:36:08.490965 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:36:08.532453 update_engine[1450]: I20251108 00:36:08.526984 1450 main.cc:92] Flatcar Update Engine starting Nov 8 00:36:08.523082 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:36:08.529530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:36:08.529583 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:36:08.533269 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:36:08.533295 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:36:08.551189 update_engine[1450]: I20251108 00:36:08.551023 1450 update_check_scheduler.cc:74] Next update check in 9m19s Nov 8 00:36:08.551713 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:36:08.555542 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:36:08.559810 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 8 00:36:08.561767 tar[1469]: linux-amd64/LICENSE Nov 8 00:36:08.561994 tar[1469]: linux-amd64/helm Nov 8 00:36:08.570164 jq[1470]: true Nov 8 00:36:08.595695 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1404) Nov 8 00:36:08.609669 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:36:08.609869 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:36:08.612665 systemd-logind[1449]: New seat seat0. Nov 8 00:36:08.613690 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:36:08.687989 systemd-networkd[1389]: eth0: DHCPv4 address 172.239.57.24/24, gateway 172.239.57.1 acquired from 23.215.118.212 Nov 8 00:36:08.690631 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Nov 8 00:36:08.695454 dbus-daemon[1439]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1389 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:36:08.718406 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:36:08.755322 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:36:08.757605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:36:08.775408 systemd[1]: Starting sshkeys.service... Nov 8 00:36:08.806277 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:36:08.815768 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:36:08.825698 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:36:08.851337 containerd[1471]: time="2025-11-08T00:36:08.850895265Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:36:09.429555 systemd-timesyncd[1370]: Contacted time server 23.186.168.133:123 (0.flatcar.pool.ntp.org). Nov 8 00:36:09.429609 systemd-timesyncd[1370]: Initial clock synchronization to Sat 2025-11-08 00:36:09.429371 UTC. Nov 8 00:36:09.429838 systemd-resolved[1337]: Clock change detected. Flushing caches. Nov 8 00:36:09.443081 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:36:09.444046 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:36:09.446675 dbus-daemon[1439]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1505 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:36:09.458535 systemd[1]: Starting polkit.service - Authorization Manager... Nov 8 00:36:09.485515 polkitd[1518]: Started polkitd version 121 Nov 8 00:36:09.486033 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:36:09.493220 polkitd[1518]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:36:09.493285 polkitd[1518]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:36:09.495488 containerd[1471]: time="2025-11-08T00:36:09.495451390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:36:09.495981 polkitd[1518]: Finished loading, compiling and executing 2 rules Nov 8 00:36:09.497400 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:36:09.497527 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:36:09.501387 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Nov 8 00:36:09.500986 polkitd[1518]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:36:09.501791 containerd[1471]: time="2025-11-08T00:36:09.501760669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:36:09.515733 containerd[1471]: time="2025-11-08T00:36:09.501826469Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:36:09.515733 containerd[1471]: time="2025-11-08T00:36:09.501846589Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:36:09.516204 containerd[1471]: time="2025-11-08T00:36:09.516183361Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:36:09.516281 containerd[1471]: time="2025-11-08T00:36:09.516267121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:36:09.516480 containerd[1471]: time="2025-11-08T00:36:09.516458951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:36:09.516555 containerd[1471]: time="2025-11-08T00:36:09.516541861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:36:09.517039 containerd[1471]: time="2025-11-08T00:36:09.517007592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:36:09.517346 containerd[1471]: time="2025-11-08T00:36:09.517309212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:36:09.517450 containerd[1471]: time="2025-11-08T00:36:09.517434253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:36:09.517543 containerd[1471]: time="2025-11-08T00:36:09.517529683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:36:09.518473 extend-filesystems[1467]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:36:09.518473 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 8 00:36:09.518473 extend-filesystems[1467]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Nov 8 00:36:09.551958 extend-filesystems[1442]: Resized filesystem in /dev/sda9 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.521077528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.523957652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.525726735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.525745575Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.550311322Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.550566592Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.555196679Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.555266939Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.555286529Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.555364550Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.555385960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.555772730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:36:09.559004 containerd[1471]: time="2025-11-08T00:36:09.557257242Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:36:09.559224 coreos-metadata[1513]: Nov 08 00:36:09.536 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 8 00:36:09.523117 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.557857723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.557882083Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.557901613Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.557922113Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.557942613Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.557977663Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.557996243Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.558016003Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.558037634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.558056064Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.558072954Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.558102084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.558122424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.559920 containerd[1471]: time="2025-11-08T00:36:09.558141564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.523769 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558159074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558174444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558192524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558214464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558232584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558248764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558269354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558285264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.558299344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.559701236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.559837596Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:36:09.560195 containerd[1471]: time="2025-11-08T00:36:09.559881986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.554308 systemd-hostnamed[1505]: Hostname set to <172-239-57-24> (transient) Nov 8 00:36:09.554784 systemd-resolved[1337]: System hostname changed to '172-239-57-24'. Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.560944008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561078538Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561155048Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561173488Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561185468Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561197598Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561228268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561242918Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561253958Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:36:09.561892 containerd[1471]: time="2025-11-08T00:36:09.561265268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:36:09.562351 containerd[1471]: time="2025-11-08T00:36:09.562108080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:36:09.562351 containerd[1471]: time="2025-11-08T00:36:09.562225070Z" level=info msg="Connect containerd service" Nov 8 00:36:09.562351 containerd[1471]: time="2025-11-08T00:36:09.562285910Z" level=info msg="using legacy CRI server" Nov 8 00:36:09.562351 containerd[1471]: time="2025-11-08T00:36:09.562294590Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:36:09.562883 containerd[1471]: time="2025-11-08T00:36:09.562692511Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:36:09.563809 containerd[1471]: time="2025-11-08T00:36:09.563759132Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:36:09.564133 
containerd[1471]: time="2025-11-08T00:36:09.563894632Z" level=info msg="Start subscribing containerd event" Nov 8 00:36:09.564133 containerd[1471]: time="2025-11-08T00:36:09.563942042Z" level=info msg="Start recovering state" Nov 8 00:36:09.564133 containerd[1471]: time="2025-11-08T00:36:09.564001842Z" level=info msg="Start event monitor" Nov 8 00:36:09.564133 containerd[1471]: time="2025-11-08T00:36:09.564017742Z" level=info msg="Start snapshots syncer" Nov 8 00:36:09.564133 containerd[1471]: time="2025-11-08T00:36:09.564026883Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:36:09.564133 containerd[1471]: time="2025-11-08T00:36:09.564034403Z" level=info msg="Start streaming server" Nov 8 00:36:09.564833 containerd[1471]: time="2025-11-08T00:36:09.564816094Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:36:09.565009 containerd[1471]: time="2025-11-08T00:36:09.564995044Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:36:09.567211 containerd[1471]: time="2025-11-08T00:36:09.567031357Z" level=info msg="containerd successfully booted in 0.149877s" Nov 8 00:36:09.567098 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:36:09.576028 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:36:09.588071 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:36:09.599599 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:36:09.599821 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:36:09.609568 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:36:09.621596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:36:09.629954 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:36:09.639686 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:36:09.640678 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:36:09.649932 coreos-metadata[1513]: Nov 08 00:36:09.649 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 8 00:36:09.784961 coreos-metadata[1513]: Nov 08 00:36:09.784 INFO Fetch successful Nov 8 00:36:09.806756 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:36:09.808781 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:36:09.811571 systemd[1]: Finished sshkeys.service. Nov 8 00:36:09.882299 tar[1469]: linux-amd64/README.md Nov 8 00:36:09.897823 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:36:10.079423 coreos-metadata[1438]: Nov 08 00:36:10.079 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 8 00:36:10.169990 coreos-metadata[1438]: Nov 08 00:36:10.169 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 8 00:36:10.354591 coreos-metadata[1438]: Nov 08 00:36:10.354 INFO Fetch successful Nov 8 00:36:10.354591 coreos-metadata[1438]: Nov 08 00:36:10.354 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 8 00:36:10.597621 systemd-networkd[1389]: eth0: Gained IPv6LL Nov 8 00:36:10.601783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:36:10.603207 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:36:10.612560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:36:10.616187 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:36:10.639219 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:36:10.642085 coreos-metadata[1438]: Nov 08 00:36:10.641 INFO Fetch successful Nov 8 00:36:10.727356 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:36:10.729957 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:36:11.523356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:36:11.524835 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:36:11.556933 systemd[1]: Startup finished in 986ms (kernel) + 8.694s (initrd) + 5.625s (userspace) = 15.305s. Nov 8 00:36:11.563225 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:36:12.119492 kubelet[1593]: E1108 00:36:12.119414 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:36:12.123493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:36:12.123708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:36:12.501549 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:36:12.510577 systemd[1]: Started sshd@0-172.239.57.24:22-147.75.109.163:47784.service - OpenSSH per-connection server daemon (147.75.109.163:47784). Nov 8 00:36:12.865748 sshd[1605]: Accepted publickey for core from 147.75.109.163 port 47784 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:36:12.867919 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:12.877741 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:36:12.886535 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:36:12.889302 systemd-logind[1449]: New session 1 of user core. Nov 8 00:36:12.901361 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:36:12.916934 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:36:12.933067 (systemd)[1609]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:36:13.030760 systemd[1609]: Queued start job for default target default.target. Nov 8 00:36:13.038633 systemd[1609]: Created slice app.slice - User Application Slice. Nov 8 00:36:13.038659 systemd[1609]: Reached target paths.target - Paths. Nov 8 00:36:13.038672 systemd[1609]: Reached target timers.target - Timers. Nov 8 00:36:13.040460 systemd[1609]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:36:13.054102 systemd[1609]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:36:13.054225 systemd[1609]: Reached target sockets.target - Sockets. Nov 8 00:36:13.054240 systemd[1609]: Reached target basic.target - Basic System. Nov 8 00:36:13.054280 systemd[1609]: Reached target default.target - Main User Target. Nov 8 00:36:13.054316 systemd[1609]: Startup finished in 114ms. 
Nov 8 00:36:13.054447 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:36:13.056019 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:36:13.338630 systemd[1]: Started sshd@1-172.239.57.24:22-147.75.109.163:47788.service - OpenSSH per-connection server daemon (147.75.109.163:47788). Nov 8 00:36:13.695942 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 47788 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:36:13.697956 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:13.703417 systemd-logind[1449]: New session 2 of user core. Nov 8 00:36:13.710457 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:36:13.961563 sshd[1620]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:13.965702 systemd[1]: sshd@1-172.239.57.24:22-147.75.109.163:47788.service: Deactivated successfully. Nov 8 00:36:13.968714 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:36:13.970399 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:36:13.971910 systemd-logind[1449]: Removed session 2. Nov 8 00:36:14.041616 systemd[1]: Started sshd@2-172.239.57.24:22-147.75.109.163:47792.service - OpenSSH per-connection server daemon (147.75.109.163:47792). Nov 8 00:36:14.384442 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 47792 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:36:14.386976 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:14.394151 systemd-logind[1449]: New session 3 of user core. Nov 8 00:36:14.399443 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:36:14.640476 sshd[1627]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:14.646827 systemd[1]: sshd@2-172.239.57.24:22-147.75.109.163:47792.service: Deactivated successfully. Nov 8 00:36:14.649068 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:36:14.649750 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:36:14.650873 systemd-logind[1449]: Removed session 3. Nov 8 00:36:14.703550 systemd[1]: Started sshd@3-172.239.57.24:22-147.75.109.163:47794.service - OpenSSH per-connection server daemon (147.75.109.163:47794). Nov 8 00:36:15.040194 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 47794 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:36:15.042262 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:15.048274 systemd-logind[1449]: New session 4 of user core. Nov 8 00:36:15.052495 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:36:15.294431 sshd[1634]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:15.298674 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:36:15.299472 systemd[1]: sshd@3-172.239.57.24:22-147.75.109.163:47794.service: Deactivated successfully. Nov 8 00:36:15.301229 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:36:15.301995 systemd-logind[1449]: Removed session 4. Nov 8 00:36:15.357922 systemd[1]: Started sshd@4-172.239.57.24:22-147.75.109.163:47804.service - OpenSSH per-connection server daemon (147.75.109.163:47804). 
Nov 8 00:36:15.698866 sshd[1641]: Accepted publickey for core from 147.75.109.163 port 47804 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:36:15.700734 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:15.704801 systemd-logind[1449]: New session 5 of user core. Nov 8 00:36:15.710448 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:36:15.913193 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:36:15.913614 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:36:15.935420 sudo[1644]: pam_unix(sudo:session): session closed for user root Nov 8 00:36:15.989885 sshd[1641]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:15.994791 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:36:15.996012 systemd[1]: sshd@4-172.239.57.24:22-147.75.109.163:47804.service: Deactivated successfully. Nov 8 00:36:15.998295 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:36:15.999410 systemd-logind[1449]: Removed session 5. Nov 8 00:36:16.058115 systemd[1]: Started sshd@5-172.239.57.24:22-147.75.109.163:47814.service - OpenSSH per-connection server daemon (147.75.109.163:47814). Nov 8 00:36:16.404553 sshd[1649]: Accepted publickey for core from 147.75.109.163 port 47814 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:36:16.406581 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:16.413449 systemd-logind[1449]: New session 6 of user core. Nov 8 00:36:16.423495 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:36:16.612286 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:36:16.612684 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:36:16.617979 sudo[1653]: pam_unix(sudo:session): session closed for user root Nov 8 00:36:16.625143 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:36:16.625589 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:36:16.646550 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:36:16.648422 auditctl[1656]: No rules Nov 8 00:36:16.649266 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:36:16.649531 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:36:16.651371 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:36:16.679169 augenrules[1674]: No rules Nov 8 00:36:16.680868 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:36:16.682422 sudo[1652]: pam_unix(sudo:session): session closed for user root Nov 8 00:36:16.736652 sshd[1649]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:16.742284 systemd[1]: sshd@5-172.239.57.24:22-147.75.109.163:47814.service: Deactivated successfully. Nov 8 00:36:16.744802 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:36:16.745489 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:36:16.746556 systemd-logind[1449]: Removed session 6. 
Nov 8 00:36:16.795415 systemd[1]: Started sshd@6-172.239.57.24:22-147.75.109.163:47820.service - OpenSSH per-connection server daemon (147.75.109.163:47820). Nov 8 00:36:17.132773 sshd[1682]: Accepted publickey for core from 147.75.109.163 port 47820 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:36:17.134626 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:17.139553 systemd-logind[1449]: New session 7 of user core. Nov 8 00:36:17.145441 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:36:17.334538 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:36:17.334901 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:36:17.609838 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:36:17.619820 (dockerd)[1701]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:36:17.888710 dockerd[1701]: time="2025-11-08T00:36:17.888454238Z" level=info msg="Starting up" Nov 8 00:36:17.973988 dockerd[1701]: time="2025-11-08T00:36:17.973955986Z" level=info msg="Loading containers: start." Nov 8 00:36:18.089347 kernel: Initializing XFRM netlink socket Nov 8 00:36:18.164277 systemd-networkd[1389]: docker0: Link UP Nov 8 00:36:18.176694 dockerd[1701]: time="2025-11-08T00:36:18.176653751Z" level=info msg="Loading containers: done." Nov 8 00:36:18.190999 dockerd[1701]: time="2025-11-08T00:36:18.190592591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:36:18.190999 dockerd[1701]: time="2025-11-08T00:36:18.190673452Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:36:18.190999 dockerd[1701]: time="2025-11-08T00:36:18.190776152Z" level=info msg="Daemon has completed initialization" Nov 8 00:36:18.192673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1831098493-merged.mount: Deactivated successfully. Nov 8 00:36:18.217670 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:36:18.217781 dockerd[1701]: time="2025-11-08T00:36:18.217738002Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:36:19.194872 containerd[1471]: time="2025-11-08T00:36:19.194818818Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:36:20.026298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488426466.mount: Deactivated successfully. 
Nov 8 00:36:21.434380 containerd[1471]: time="2025-11-08T00:36:21.434166616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:21.435186 containerd[1471]: time="2025-11-08T00:36:21.435145298Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 8 00:36:21.436890 containerd[1471]: time="2025-11-08T00:36:21.435830719Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:21.438158 containerd[1471]: time="2025-11-08T00:36:21.438134792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:21.439139 containerd[1471]: time="2025-11-08T00:36:21.439117254Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.244260616s" Nov 8 00:36:21.439220 containerd[1471]: time="2025-11-08T00:36:21.439204534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:36:21.440179 containerd[1471]: time="2025-11-08T00:36:21.440150725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:36:22.374421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:36:22.382643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:36:22.570489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:36:22.575882 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:36:22.621071 kubelet[1910]: E1108 00:36:22.620675 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:36:22.626833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:36:22.627041 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:36:22.882249 containerd[1471]: time="2025-11-08T00:36:22.881383927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:22.883154 containerd[1471]: time="2025-11-08T00:36:22.883110060Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 8 00:36:22.883810 containerd[1471]: time="2025-11-08T00:36:22.883745931Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:22.886252 containerd[1471]: time="2025-11-08T00:36:22.886214074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:22.887474 containerd[1471]: time="2025-11-08T00:36:22.887341276Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.44708585s" Nov 8 00:36:22.887474 containerd[1471]: time="2025-11-08T00:36:22.887375276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:36:22.888435 containerd[1471]: time="2025-11-08T00:36:22.888408088Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:36:24.108854 containerd[1471]: time="2025-11-08T00:36:24.107309186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:24.108854 containerd[1471]: time="2025-11-08T00:36:24.108556138Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 8 00:36:24.108854 containerd[1471]: time="2025-11-08T00:36:24.108801108Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:24.111610 containerd[1471]: time="2025-11-08T00:36:24.111580472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:24.112593 containerd[1471]: time="2025-11-08T00:36:24.112556574Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.224119616s" Nov 8 00:36:24.112628 containerd[1471]: time="2025-11-08T00:36:24.112594964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:36:24.114101 containerd[1471]: 
time="2025-11-08T00:36:24.114057956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:36:25.399471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626290706.mount: Deactivated successfully. Nov 8 00:36:25.748891 containerd[1471]: time="2025-11-08T00:36:25.748735888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:25.750617 containerd[1471]: time="2025-11-08T00:36:25.750551741Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 8 00:36:25.750792 containerd[1471]: time="2025-11-08T00:36:25.750768241Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:25.752992 containerd[1471]: time="2025-11-08T00:36:25.752955784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:25.753982 containerd[1471]: time="2025-11-08T00:36:25.753927636Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.6398311s" Nov 8 00:36:25.754169 containerd[1471]: time="2025-11-08T00:36:25.754124256Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:36:25.754855 containerd[1471]: time="2025-11-08T00:36:25.754685847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:36:26.402583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069178649.mount: Deactivated successfully. 
Nov 8 00:36:27.114663 containerd[1471]: time="2025-11-08T00:36:27.114282286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:27.115482 containerd[1471]: time="2025-11-08T00:36:27.115273257Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 8 00:36:27.115947 containerd[1471]: time="2025-11-08T00:36:27.115886208Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:27.118657 containerd[1471]: time="2025-11-08T00:36:27.118635843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:27.120270 containerd[1471]: time="2025-11-08T00:36:27.119640404Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.364927297s" Nov 8 00:36:27.120270 containerd[1471]: time="2025-11-08T00:36:27.119678904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:36:27.120591 containerd[1471]: time="2025-11-08T00:36:27.120575875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:36:27.747431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925930821.mount: Deactivated successfully. 
Nov 8 00:36:27.753527 containerd[1471]: time="2025-11-08T00:36:27.753489695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:27.754563 containerd[1471]: time="2025-11-08T00:36:27.754511106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:36:27.754979 containerd[1471]: time="2025-11-08T00:36:27.754950017Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:27.756804 containerd[1471]: time="2025-11-08T00:36:27.756769380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:27.757967 containerd[1471]: time="2025-11-08T00:36:27.757497161Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 636.850155ms" Nov 8 00:36:27.757967 containerd[1471]: time="2025-11-08T00:36:27.757527251Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:36:27.758449 containerd[1471]: time="2025-11-08T00:36:27.758427152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:36:28.388167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105566757.mount: Deactivated successfully. Nov 8 00:36:30.281211 containerd[1471]: time="2025-11-08T00:36:30.280231055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:30.281211 containerd[1471]: time="2025-11-08T00:36:30.281173406Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 8 00:36:30.281728 containerd[1471]: time="2025-11-08T00:36:30.281705557Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:30.284078 containerd[1471]: time="2025-11-08T00:36:30.284029360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:30.285703 containerd[1471]: time="2025-11-08T00:36:30.285652883Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.527198431s" Nov 8 00:36:30.285703 containerd[1471]: time="2025-11-08T00:36:30.285689693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:36:32.311588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
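The pull sequence above (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) is the control-plane image pre-pull running through containerd's CRI plugin. A minimal sketch of the same pull done directly against containerd's Go client, assuming the default socket path and the "k8s.io" namespace the CRI plugin uses:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to the same containerd instance the kubelet talks to.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pull and unpack one of the images from the log above.
    	img, err := client.Pull(ctx, "registry.k8s.io/etcd:3.5.21-0", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	size, err := img.Size(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }

The "size ... in ..." lines in the log report the same content size and wall-clock duration this client call would observe.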
Nov 8 00:36:32.325826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:36:32.356801 systemd[1]: Reloading requested from client PID 2068 ('systemctl') (unit session-7.scope)... Nov 8 00:36:32.356814 systemd[1]: Reloading... Nov 8 00:36:32.507441 zram_generator::config[2117]: No configuration found. Nov 8 00:36:32.600665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:36:32.671418 systemd[1]: Reloading finished in 314 ms. Nov 8 00:36:32.722965 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:36:32.725509 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:36:32.725747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:36:32.731534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:36:32.888782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:36:32.892907 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:36:32.939233 kubelet[2164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:36:32.939233 kubelet[2164]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:36:32.939233 kubelet[2164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:36:32.939233 kubelet[2164]: I1108 00:36:32.938492 2164 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:36:33.174193 kubelet[2164]: I1108 00:36:33.174094 2164 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:36:33.174193 kubelet[2164]: I1108 00:36:33.174125 2164 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:36:33.174647 kubelet[2164]: I1108 00:36:33.174346 2164 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:36:33.197663 kubelet[2164]: I1108 00:36:33.197111 2164 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:36:33.197853 kubelet[2164]: E1108 00:36:33.197832 2164 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.57.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:36:33.209879 kubelet[2164]: E1108 00:36:33.209829 2164 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:36:33.209879 kubelet[2164]: I1108 00:36:33.209865 2164 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:36:33.214080 kubelet[2164]: I1108 00:36:33.214055 2164 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:36:33.214350 kubelet[2164]: I1108 00:36:33.214298 2164 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:36:33.214500 kubelet[2164]: I1108 00:36:33.214340 2164 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-57-24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:36:33.214500 kubelet[2164]: I1108 00:36:33.214496 2164 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:36:33.214605 kubelet[2164]: I1108 00:36:33.214505 2164 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:36:33.215404 kubelet[2164]: I1108 00:36:33.215375 2164 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:36:33.217697 kubelet[2164]: I1108 00:36:33.217591 2164 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:36:33.217697 kubelet[2164]: I1108 00:36:33.217611 2164 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:36:33.218419 kubelet[2164]: I1108 00:36:33.218200 2164 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:36:33.218419 kubelet[2164]: I1108 00:36:33.218226 2164 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:36:33.224269 kubelet[2164]: E1108 00:36:33.224244 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.57.24:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-57-24&limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:36:33.224693 kubelet[2164]: I1108 00:36:33.224678 2164 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:36:33.225150 kubelet[2164]: I1108 00:36:33.225135 2164 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 
00:36:33.225948 kubelet[2164]: W1108 00:36:33.225936 2164 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:36:33.229169 kubelet[2164]: I1108 00:36:33.229156 2164 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:36:33.229257 kubelet[2164]: I1108 00:36:33.229247 2164 server.go:1289] "Started kubelet" Nov 8 00:36:33.232971 kubelet[2164]: E1108 00:36:33.232742 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.57.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:36:33.233219 kubelet[2164]: I1108 00:36:33.233161 2164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:36:33.233463 kubelet[2164]: I1108 00:36:33.233425 2164 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:36:33.233646 kubelet[2164]: I1108 00:36:33.233630 2164 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:36:33.235287 kubelet[2164]: I1108 00:36:33.235261 2164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:36:33.239341 kubelet[2164]: E1108 00:36:33.236660 2164 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.57.24:6443/api/v1/namespaces/default/events\": dial tcp 172.239.57.24:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-57-24.1875e0f9edf60d94 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-57-24,UID:172-239-57-24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-57-24,},FirstTimestamp:2025-11-08 00:36:33.229221268 +0000 UTC m=+0.330631517,LastTimestamp:2025-11-08 00:36:33.229221268 +0000 UTC m=+0.330631517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-57-24,}" Nov 8 00:36:33.239341 kubelet[2164]: I1108 00:36:33.238616 2164 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:36:33.239341 kubelet[2164]: I1108 00:36:33.239183 2164 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:36:33.242239 kubelet[2164]: E1108 00:36:33.242225 2164 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:36:33.242502 kubelet[2164]: E1108 00:36:33.242490 2164 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-57-24\" not found" Nov 8 00:36:33.242582 kubelet[2164]: I1108 00:36:33.242572 2164 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:36:33.242806 kubelet[2164]: I1108 00:36:33.242791 2164 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:36:33.243000 kubelet[2164]: I1108 00:36:33.242989 2164 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:36:33.243550 kubelet[2164]: I1108 00:36:33.243536 2164 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:36:33.243666 kubelet[2164]: I1108 00:36:33.243651 2164 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:36:33.248828 kubelet[2164]: E1108 00:36:33.248575 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.57.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:36:33.248828 kubelet[2164]: E1108 00:36:33.248769 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-24?timeout=10s\": dial tcp 172.239.57.24:6443: connect: connection refused" interval="200ms" Nov 8 00:36:33.249550 kubelet[2164]: I1108 00:36:33.249535 2164 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:36:33.270661 kubelet[2164]: I1108 00:36:33.270632 2164 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:36:33.270661 kubelet[2164]: I1108 00:36:33.270653 2164 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:36:33.270725 kubelet[2164]: I1108 00:36:33.270669 2164 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:36:33.271879 kubelet[2164]: I1108 00:36:33.271850 2164 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:36:33.272021 kubelet[2164]: I1108 00:36:33.271993 2164 policy_none.go:49] "None policy: Start" Nov 8 00:36:33.272021 kubelet[2164]: I1108 00:36:33.272017 2164 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:36:33.272067 kubelet[2164]: I1108 00:36:33.272029 2164 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:36:33.273543 kubelet[2164]: I1108 00:36:33.273528 2164 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:36:33.273676 kubelet[2164]: I1108 00:36:33.273578 2164 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:36:33.273676 kubelet[2164]: I1108 00:36:33.273596 2164 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
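Note the retry interval on the "Failed to ensure lease exists" error: 200ms here, then 400ms, 800ms, and 1.6s further down. The node-lease controller backs off exponentially while the API server is still refusing connections. A sketch of that retry shape using the apimachinery helper, with parameters chosen to match the observed intervals (the kubelet's actual tuning may differ):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	// Exponential backoff matching the intervals seen in the log:
    	// 200ms, 400ms, 800ms, 1.6s, ...
    	backoff := wait.Backoff{
    		Duration: 200 * time.Millisecond,
    		Factor:   2.0,
    		Steps:    5,
    	}
    	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
    		fmt.Println("ensure lease; delay doubles on failure")
    		return false, nil // pretend the apiserver is still refusing connections
    	})
    	fmt.Println(err) // times out after Steps failed attempts
    }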
Nov 8 00:36:33.273676 kubelet[2164]: I1108 00:36:33.273604 2164 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:36:33.273770 kubelet[2164]: E1108 00:36:33.273753 2164 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:36:33.278774 kubelet[2164]: E1108 00:36:33.278745 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.57.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:36:33.282841 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:36:33.296962 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:36:33.300992 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:36:33.308164 kubelet[2164]: E1108 00:36:33.308147 2164 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:36:33.308646 kubelet[2164]: I1108 00:36:33.308635 2164 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:36:33.308732 kubelet[2164]: I1108 00:36:33.308709 2164 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:36:33.309066 kubelet[2164]: I1108 00:36:33.309054 2164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:36:33.310896 kubelet[2164]: E1108 00:36:33.310865 2164 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:36:33.311254 kubelet[2164]: E1108 00:36:33.311214 2164 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-57-24\" not found" Nov 8 00:36:33.385253 systemd[1]: Created slice kubepods-burstable-podb21bb66ba7805430d2047a151e222cc4.slice - libcontainer container kubepods-burstable-podb21bb66ba7805430d2047a151e222cc4.slice. Nov 8 00:36:33.391405 kubelet[2164]: E1108 00:36:33.391163 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:33.393939 systemd[1]: Created slice kubepods-burstable-pode825e32b8db8a4f5f111fa22e1f1424c.slice - libcontainer container kubepods-burstable-pode825e32b8db8a4f5f111fa22e1f1424c.slice. Nov 8 00:36:33.396541 kubelet[2164]: E1108 00:36:33.396350 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:33.398580 systemd[1]: Created slice kubepods-burstable-pod81206ca2c8d07d8e02165dba3d0f4f8b.slice - libcontainer container kubepods-burstable-pod81206ca2c8d07d8e02165dba3d0f4f8b.slice. 
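The kubepods-burstable-pod<hash>.slice units just created show the systemd cgroup driver at work: each pod gets a slice under kubepods.slice, nested by QoS class, with the pod UID embedded in the unit name. A simplified sketch of that name construction (the real kubelet also escapes characters systemd cannot accept, which these hash-style static-pod UIDs do not need):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName builds the systemd slice unit for a pod following the
    // kubelet's systemd cgroup naming: kubepods.slice -> kubepods-<qos>.slice
    // -> kubepods-<qos>-pod<uid>.slice. Dashes in the UID are replaced with
    // underscores, since "-" separates hierarchy levels in slice names.
    func podSliceName(qosClass, podUID string) string {
    	uid := strings.ReplaceAll(podUID, "-", "_")
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
    }

    func main() {
    	// Matches the unit created in the log above.
    	fmt.Println(podSliceName("burstable", "81206ca2c8d07d8e02165dba3d0f4f8b"))
    	// kubepods-burstable-pod81206ca2c8d07d8e02165dba3d0f4f8b.slice
    }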
Nov 8 00:36:33.400295 kubelet[2164]: E1108 00:36:33.400276 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:33.410231 kubelet[2164]: I1108 00:36:33.410195 2164 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-24" Nov 8 00:36:33.410468 kubelet[2164]: E1108 00:36:33.410443 2164 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.24:6443/api/v1/nodes\": dial tcp 172.239.57.24:6443: connect: connection refused" node="172-239-57-24" Nov 8 00:36:33.449597 kubelet[2164]: E1108 00:36:33.449513 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-24?timeout=10s\": dial tcp 172.239.57.24:6443: connect: connection refused" interval="400ms" Nov 8 00:36:33.543965 kubelet[2164]: I1108 00:36:33.543901 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81206ca2c8d07d8e02165dba3d0f4f8b-ca-certs\") pod \"kube-apiserver-172-239-57-24\" (UID: \"81206ca2c8d07d8e02165dba3d0f4f8b\") " pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:33.543965 kubelet[2164]: I1108 00:36:33.543946 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-ca-certs\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:33.543965 kubelet[2164]: I1108 00:36:33.543965 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-flexvolume-dir\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:33.543965 kubelet[2164]: I1108 00:36:33.543984 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-k8s-certs\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:33.544191 kubelet[2164]: I1108 00:36:33.544000 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-kubeconfig\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:33.544191 kubelet[2164]: I1108 00:36:33.544014 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:33.544191 kubelet[2164]: I1108 00:36:33.544028 2164 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81206ca2c8d07d8e02165dba3d0f4f8b-k8s-certs\") pod \"kube-apiserver-172-239-57-24\" (UID: \"81206ca2c8d07d8e02165dba3d0f4f8b\") " pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:33.544191 kubelet[2164]: I1108 00:36:33.544043 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81206ca2c8d07d8e02165dba3d0f4f8b-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-57-24\" (UID: \"81206ca2c8d07d8e02165dba3d0f4f8b\") " pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:33.544191 kubelet[2164]: I1108 00:36:33.544059 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e825e32b8db8a4f5f111fa22e1f1424c-kubeconfig\") pod \"kube-scheduler-172-239-57-24\" (UID: \"e825e32b8db8a4f5f111fa22e1f1424c\") " pod="kube-system/kube-scheduler-172-239-57-24" Nov 8 00:36:33.612930 kubelet[2164]: I1108 00:36:33.612873 2164 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-24" Nov 8 00:36:33.613234 kubelet[2164]: E1108 00:36:33.613190 2164 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.24:6443/api/v1/nodes\": dial tcp 172.239.57.24:6443: connect: connection refused" node="172-239-57-24" Nov 8 00:36:33.691879 kubelet[2164]: E1108 00:36:33.691851 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:33.692596 containerd[1471]: time="2025-11-08T00:36:33.692539183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-57-24,Uid:b21bb66ba7805430d2047a151e222cc4,Namespace:kube-system,Attempt:0,}" Nov 8 00:36:33.697023 kubelet[2164]: E1108 00:36:33.696742 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:33.697517 containerd[1471]: time="2025-11-08T00:36:33.697478170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-57-24,Uid:e825e32b8db8a4f5f111fa22e1f1424c,Namespace:kube-system,Attempt:0,}" Nov 8 00:36:33.700881 kubelet[2164]: E1108 00:36:33.700723 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:33.701044 containerd[1471]: time="2025-11-08T00:36:33.700968125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-57-24,Uid:81206ca2c8d07d8e02165dba3d0f4f8b,Namespace:kube-system,Attempt:0,}" Nov 8 00:36:33.850980 kubelet[2164]: E1108 00:36:33.850908 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-24?timeout=10s\": dial tcp 172.239.57.24:6443: connect: connection refused" interval="800ms" Nov 8 00:36:34.016450 kubelet[2164]: I1108 00:36:34.015738 2164 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-24" Nov 8 00:36:34.016450 kubelet[2164]: E1108 00:36:34.016094 2164 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.24:6443/api/v1/nodes\": dial tcp 172.239.57.24:6443: connect: connection refused" node="172-239-57-24" Nov 8 00:36:34.230611 kubelet[2164]: E1108 00:36:34.230551 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.57.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:36:34.302563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292760497.mount: Deactivated successfully. Nov 8 00:36:34.308224 containerd[1471]: time="2025-11-08T00:36:34.308172286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:36:34.308942 containerd[1471]: time="2025-11-08T00:36:34.308916747Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:36:34.310105 containerd[1471]: time="2025-11-08T00:36:34.310036279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:36:34.310105 containerd[1471]: time="2025-11-08T00:36:34.310085289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:36:34.310735 containerd[1471]: time="2025-11-08T00:36:34.310706420Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:36:34.312664 containerd[1471]: time="2025-11-08T00:36:34.311682451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:36:34.314278 containerd[1471]: time="2025-11-08T00:36:34.314075595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.434642ms" Nov 8 00:36:34.316082 containerd[1471]: time="2025-11-08T00:36:34.316045118Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:36:34.317161 containerd[1471]: time="2025-11-08T00:36:34.317120059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.106954ms" Nov 8 00:36:34.318000 containerd[1471]: time="2025-11-08T00:36:34.317964701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:36:34.324993 
containerd[1471]: time="2025-11-08T00:36:34.324960411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.398031ms" Nov 8 00:36:34.382513 kubelet[2164]: E1108 00:36:34.382476 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.57.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:36:34.437460 containerd[1471]: time="2025-11-08T00:36:34.433692934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:34.437643 containerd[1471]: time="2025-11-08T00:36:34.437424670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:34.437730 containerd[1471]: time="2025-11-08T00:36:34.437612950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:34.438237 containerd[1471]: time="2025-11-08T00:36:34.438037311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:34.439598 containerd[1471]: time="2025-11-08T00:36:34.439509963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:34.440154 containerd[1471]: time="2025-11-08T00:36:34.440094094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:34.440212 containerd[1471]: time="2025-11-08T00:36:34.440171894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:34.442747 containerd[1471]: time="2025-11-08T00:36:34.442631928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:34.454468 kubelet[2164]: E1108 00:36:34.454431 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.57.24:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-57-24&limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:36:34.463685 containerd[1471]: time="2025-11-08T00:36:34.463590149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:34.463685 containerd[1471]: time="2025-11-08T00:36:34.463633719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:34.463685 containerd[1471]: time="2025-11-08T00:36:34.463644469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:34.466044 containerd[1471]: time="2025-11-08T00:36:34.463716669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:34.479464 systemd[1]: Started cri-containerd-533de1ba60b852501cc2901695a5badbdbd634d8a06dce2141cee15f39800cf9.scope - libcontainer container 533de1ba60b852501cc2901695a5badbdbd634d8a06dce2141cee15f39800cf9. Nov 8 00:36:34.491618 systemd[1]: Started cri-containerd-4a7f8ee4eb04278a11124ef239aad769e3677215debcc63093bf943e6c0732f5.scope - libcontainer container 4a7f8ee4eb04278a11124ef239aad769e3677215debcc63093bf943e6c0732f5. Nov 8 00:36:34.496227 systemd[1]: Started cri-containerd-ca3d86383e983d1fa38c44783e1cd5aba509da1b7ced70a2f258bc8950d26b39.scope - libcontainer container ca3d86383e983d1fa38c44783e1cd5aba509da1b7ced70a2f258bc8950d26b39. Nov 8 00:36:34.548252 containerd[1471]: time="2025-11-08T00:36:34.548120436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-57-24,Uid:e825e32b8db8a4f5f111fa22e1f1424c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca3d86383e983d1fa38c44783e1cd5aba509da1b7ced70a2f258bc8950d26b39\"" Nov 8 00:36:34.550420 kubelet[2164]: E1108 00:36:34.550189 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:34.557889 containerd[1471]: time="2025-11-08T00:36:34.557101929Z" level=info msg="CreateContainer within sandbox \"ca3d86383e983d1fa38c44783e1cd5aba509da1b7ced70a2f258bc8950d26b39\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:36:34.572536 containerd[1471]: time="2025-11-08T00:36:34.571869762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-57-24,Uid:b21bb66ba7805430d2047a151e222cc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"533de1ba60b852501cc2901695a5badbdbd634d8a06dce2141cee15f39800cf9\"" Nov 8 00:36:34.572590 containerd[1471]: time="2025-11-08T00:36:34.572526393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-57-24,Uid:81206ca2c8d07d8e02165dba3d0f4f8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a7f8ee4eb04278a11124ef239aad769e3677215debcc63093bf943e6c0732f5\"" Nov 8 00:36:34.573295 kubelet[2164]: E1108 00:36:34.573136 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:34.577714 kubelet[2164]: E1108 00:36:34.577583 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:34.580513 containerd[1471]: time="2025-11-08T00:36:34.580483695Z" level=info msg="CreateContainer within sandbox \"533de1ba60b852501cc2901695a5badbdbd634d8a06dce2141cee15f39800cf9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:36:34.583175 containerd[1471]: time="2025-11-08T00:36:34.582072647Z" level=info msg="CreateContainer within sandbox \"4a7f8ee4eb04278a11124ef239aad769e3677215debcc63093bf943e6c0732f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:36:34.588813 containerd[1471]: time="2025-11-08T00:36:34.588761897Z" level=info msg="CreateContainer 
within sandbox \"ca3d86383e983d1fa38c44783e1cd5aba509da1b7ced70a2f258bc8950d26b39\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8784528e4814d20681d7361743bf42a176281815495fd7ee6076837a3e9ded79\"" Nov 8 00:36:34.590272 containerd[1471]: time="2025-11-08T00:36:34.589339408Z" level=info msg="StartContainer for \"8784528e4814d20681d7361743bf42a176281815495fd7ee6076837a3e9ded79\"" Nov 8 00:36:34.597394 containerd[1471]: time="2025-11-08T00:36:34.597311460Z" level=info msg="CreateContainer within sandbox \"533de1ba60b852501cc2901695a5badbdbd634d8a06dce2141cee15f39800cf9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c92452bab13b538f5ae7ac2356597b5b2806de2a1c905a10fa1b915bb46e0e92\"" Nov 8 00:36:34.598144 containerd[1471]: time="2025-11-08T00:36:34.598115231Z" level=info msg="StartContainer for \"c92452bab13b538f5ae7ac2356597b5b2806de2a1c905a10fa1b915bb46e0e92\"" Nov 8 00:36:34.602117 containerd[1471]: time="2025-11-08T00:36:34.601991697Z" level=info msg="CreateContainer within sandbox \"4a7f8ee4eb04278a11124ef239aad769e3677215debcc63093bf943e6c0732f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ac08332d97e925316a460be5f6fd46a6a50576ebf2261fa278e1d819625605a4\"" Nov 8 00:36:34.602551 containerd[1471]: time="2025-11-08T00:36:34.602532138Z" level=info msg="StartContainer for \"ac08332d97e925316a460be5f6fd46a6a50576ebf2261fa278e1d819625605a4\"" Nov 8 00:36:34.624463 systemd[1]: Started cri-containerd-8784528e4814d20681d7361743bf42a176281815495fd7ee6076837a3e9ded79.scope - libcontainer container 8784528e4814d20681d7361743bf42a176281815495fd7ee6076837a3e9ded79. Nov 8 00:36:34.650449 systemd[1]: Started cri-containerd-ac08332d97e925316a460be5f6fd46a6a50576ebf2261fa278e1d819625605a4.scope - libcontainer container ac08332d97e925316a460be5f6fd46a6a50576ebf2261fa278e1d819625605a4. Nov 8 00:36:34.651644 systemd[1]: Started cri-containerd-c92452bab13b538f5ae7ac2356597b5b2806de2a1c905a10fa1b915bb46e0e92.scope - libcontainer container c92452bab13b538f5ae7ac2356597b5b2806de2a1c905a10fa1b915bb46e0e92. 
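The RunPodSandbox -> CreateContainer -> StartContainer sequence above is the CRI lifecycle for the three static control-plane pods. A compressed sketch of those three calls against containerd's CRI endpoint (error handling trimmed; the metadata values are taken from the scheduler pod in the log):

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx := context.Background()
    	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "kube-scheduler-172-239-57-24",
    			Uid:       "e825e32b8db8a4f5f111fa22e1f1424c",
    			Namespace: "kube-system",
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		log.Fatal(err)
    	}

    	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.33.5"},
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
    		log.Fatal(err)
    	}
    }

The sandbox IDs in the log (533de1ba..., 4a7f8ee4..., ca3d8638...) are what RunPodSandbox returns, and they reappear as the cri-containerd-<id>.scope units systemd starts.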
Nov 8 00:36:34.652248 kubelet[2164]: E1108 00:36:34.652216 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-24?timeout=10s\": dial tcp 172.239.57.24:6443: connect: connection refused" interval="1.6s" Nov 8 00:36:34.695261 containerd[1471]: time="2025-11-08T00:36:34.695229177Z" level=info msg="StartContainer for \"8784528e4814d20681d7361743bf42a176281815495fd7ee6076837a3e9ded79\" returns successfully" Nov 8 00:36:34.719482 containerd[1471]: time="2025-11-08T00:36:34.719222933Z" level=info msg="StartContainer for \"ac08332d97e925316a460be5f6fd46a6a50576ebf2261fa278e1d819625605a4\" returns successfully" Nov 8 00:36:34.746937 containerd[1471]: time="2025-11-08T00:36:34.746897804Z" level=info msg="StartContainer for \"c92452bab13b538f5ae7ac2356597b5b2806de2a1c905a10fa1b915bb46e0e92\" returns successfully" Nov 8 00:36:34.757965 kubelet[2164]: E1108 00:36:34.757919 2164 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.57.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.57.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:36:34.820258 kubelet[2164]: I1108 00:36:34.819507 2164 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-24" Nov 8 00:36:35.309730 kubelet[2164]: E1108 00:36:35.309619 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:35.310050 kubelet[2164]: E1108 00:36:35.309763 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:35.312347 kubelet[2164]: E1108 00:36:35.311512 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:35.312347 kubelet[2164]: E1108 00:36:35.311603 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:35.317644 kubelet[2164]: E1108 00:36:35.317612 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:35.317751 kubelet[2164]: E1108 00:36:35.317730 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:36.321648 kubelet[2164]: E1108 00:36:36.321405 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:36.322035 kubelet[2164]: E1108 00:36:36.321717 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:36.322035 kubelet[2164]: E1108 00:36:36.321923 2164 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:36.322035 kubelet[2164]: E1108 00:36:36.321994 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:37.056773 kubelet[2164]: E1108 00:36:37.056728 2164 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-57-24\" not found" node="172-239-57-24" Nov 8 00:36:37.132484 kubelet[2164]: I1108 00:36:37.132450 2164 kubelet_node_status.go:78] "Successfully registered node" node="172-239-57-24" Nov 8 00:36:37.132484 kubelet[2164]: E1108 00:36:37.132479 2164 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-239-57-24\": node \"172-239-57-24\" not found" Nov 8 00:36:37.148733 kubelet[2164]: I1108 00:36:37.148694 2164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-24" Nov 8 00:36:37.164630 kubelet[2164]: E1108 00:36:37.164408 2164 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-57-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-57-24" Nov 8 00:36:37.164703 kubelet[2164]: I1108 00:36:37.164637 2164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:37.169065 kubelet[2164]: E1108 00:36:37.169034 2164 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-57-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:37.169065 kubelet[2164]: I1108 00:36:37.169057 2164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:37.172456 kubelet[2164]: E1108 00:36:37.172426 2164 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-57-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:37.233032 kubelet[2164]: I1108 00:36:37.232371 2164 apiserver.go:52] "Watching apiserver" Nov 8 00:36:37.243231 kubelet[2164]: I1108 00:36:37.243210 2164 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:36:39.353049 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Nov 8 00:36:39.353066 systemd[1]: Reloading... Nov 8 00:36:39.437365 zram_generator::config[2491]: No configuration found. Nov 8 00:36:39.557295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:36:39.641772 systemd[1]: Reloading finished in 288 ms. Nov 8 00:36:39.680972 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 8 00:36:39.694138 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:36:39.710476 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:36:39.710732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:36:39.717681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:36:39.863991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:36:39.873699 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:36:39.919936 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:36:39.919936 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:36:39.919936 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:36:39.919936 kubelet[2549]: I1108 00:36:39.919650 2549 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:36:39.925091 kubelet[2549]: I1108 00:36:39.925066 2549 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:36:39.925091 kubelet[2549]: I1108 00:36:39.925085 2549 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:36:39.925275 kubelet[2549]: I1108 00:36:39.925255 2549 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:36:39.926218 kubelet[2549]: I1108 00:36:39.926199 2549 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:36:39.928414 kubelet[2549]: I1108 00:36:39.928095 2549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:36:39.931436 kubelet[2549]: E1108 00:36:39.931408 2549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:36:39.931436 kubelet[2549]: I1108 00:36:39.931433 2549 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:36:39.935506 kubelet[2549]: I1108 00:36:39.935395 2549 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:36:39.935630 kubelet[2549]: I1108 00:36:39.935592 2549 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:36:39.935734 kubelet[2549]: I1108 00:36:39.935620 2549 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-57-24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:36:39.935734 kubelet[2549]: I1108 00:36:39.935732 2549 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:36:39.935836 kubelet[2549]: I1108 00:36:39.935741 2549 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:36:39.935836 kubelet[2549]: I1108 00:36:39.935777 2549 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:36:39.936369 kubelet[2549]: I1108 00:36:39.935903 2549 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:36:39.936369 kubelet[2549]: I1108 00:36:39.935917 2549 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:36:39.936369 kubelet[2549]: I1108 00:36:39.935940 2549 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:36:39.936369 kubelet[2549]: I1108 00:36:39.935953 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:36:39.938562 kubelet[2549]: I1108 00:36:39.938546 2549 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:36:39.938971 kubelet[2549]: I1108 00:36:39.938951 2549 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:36:39.943739 kubelet[2549]: I1108 00:36:39.943691 2549 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:36:39.943903 kubelet[2549]: I1108 00:36:39.943892 2549 server.go:1289] "Started kubelet" Nov 8 00:36:39.945610 kubelet[2549]: I1108 00:36:39.945597 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:36:39.949750 kubelet[2549]: I1108 00:36:39.949717 2549 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:36:39.952970 kubelet[2549]: I1108 00:36:39.952949 2549 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:36:39.955671 kubelet[2549]: I1108 00:36:39.955652 2549 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:36:39.956427 kubelet[2549]: E1108 00:36:39.955794 2549 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-57-24\" not found" Nov 8 00:36:39.956999 kubelet[2549]: I1108 00:36:39.956977 2549 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:36:39.957107 kubelet[2549]: I1108 00:36:39.957090 2549 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:36:39.958997 kubelet[2549]: I1108 00:36:39.958971 2549 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:36:39.959290 kubelet[2549]: I1108 00:36:39.959141 2549 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:36:39.959290 kubelet[2549]: I1108 00:36:39.959183 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:36:39.959641 kubelet[2549]: I1108 00:36:39.959454 2549 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:36:39.959686 kubelet[2549]: I1108 00:36:39.959647 2549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:36:39.964090 kubelet[2549]: E1108 00:36:39.964062 2549 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:36:39.965516 kubelet[2549]: I1108 00:36:39.965486 2549 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:36:39.968626 kubelet[2549]: I1108 00:36:39.968603 2549 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:36:39.969831 kubelet[2549]: I1108 00:36:39.969817 2549 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:36:39.969920 kubelet[2549]: I1108 00:36:39.969909 2549 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:36:39.969997 kubelet[2549]: I1108 00:36:39.969988 2549 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
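
The three deprecation warnings at startup all point at the same remedy: carry the flags in the kubelet config file instead. A minimal KubeletConfiguration sketch that mirrors what this kubelet logs — the containerd socket path is an assumption (the log never prints it); the plugin directory comes from the FlexVolume probe entries further down, and the eviction values from the HardEvictionThresholds dump above:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed path; replaces --container-runtime-endpoint
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir
    cgroupDriver: systemd                # what the kubelet fell back to after the CRI RuntimeConfig call was unimplemented
    staticPodPath: /etc/kubernetes/manifests
    evictionHard:                        # same values as the HardEvictionThresholds dump
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"

--pod-infra-container-image is the exception: it has no config-file field; per the deprecation notice above, the image garbage collector will take the sandbox image from the CRI runtime, so that setting belongs in containerd's own configuration.
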
Nov 8 00:36:39.970038 kubelet[2549]: I1108 00:36:39.970031 2549 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:36:39.970140 kubelet[2549]: E1108 00:36:39.970125 2549 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:36:40.010241 kubelet[2549]: I1108 00:36:40.010207 2549 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:36:40.010363 kubelet[2549]: I1108 00:36:40.010263 2549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:36:40.010363 kubelet[2549]: I1108 00:36:40.010282 2549 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:36:40.010481 kubelet[2549]: I1108 00:36:40.010465 2549 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:36:40.010503 kubelet[2549]: I1108 00:36:40.010480 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:36:40.010526 kubelet[2549]: I1108 00:36:40.010522 2549 policy_none.go:49] "None policy: Start" Nov 8 00:36:40.010549 kubelet[2549]: I1108 00:36:40.010531 2549 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:36:40.010549 kubelet[2549]: I1108 00:36:40.010543 2549 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:36:40.010665 kubelet[2549]: I1108 00:36:40.010652 2549 state_mem.go:75] "Updated machine memory state" Nov 8 00:36:40.014647 kubelet[2549]: E1108 00:36:40.014612 2549 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:36:40.014792 kubelet[2549]: I1108 00:36:40.014773 2549 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:36:40.014835 kubelet[2549]: I1108 00:36:40.014789 2549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:36:40.015270 kubelet[2549]: I1108 00:36:40.015181 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:36:40.016606 kubelet[2549]: E1108 00:36:40.016579 2549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:36:40.070899 kubelet[2549]: I1108 00:36:40.070870 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-24" Nov 8 00:36:40.071615 kubelet[2549]: I1108 00:36:40.071143 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:40.071615 kubelet[2549]: I1108 00:36:40.071300 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:40.117165 kubelet[2549]: I1108 00:36:40.117129 2549 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-24" Nov 8 00:36:40.124336 kubelet[2549]: I1108 00:36:40.124303 2549 kubelet_node_status.go:124] "Node was previously registered" node="172-239-57-24" Nov 8 00:36:40.124413 kubelet[2549]: I1108 00:36:40.124390 2549 kubelet_node_status.go:78] "Successfully registered node" node="172-239-57-24" Nov 8 00:36:40.258882 kubelet[2549]: I1108 00:36:40.258602 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-ca-certs\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:40.258882 kubelet[2549]: I1108 00:36:40.258639 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-kubeconfig\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:40.258882 kubelet[2549]: I1108 00:36:40.258659 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:40.258882 kubelet[2549]: I1108 00:36:40.258678 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e825e32b8db8a4f5f111fa22e1f1424c-kubeconfig\") pod \"kube-scheduler-172-239-57-24\" (UID: \"e825e32b8db8a4f5f111fa22e1f1424c\") " pod="kube-system/kube-scheduler-172-239-57-24" Nov 8 00:36:40.258882 kubelet[2549]: I1108 00:36:40.258693 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81206ca2c8d07d8e02165dba3d0f4f8b-ca-certs\") pod \"kube-apiserver-172-239-57-24\" (UID: \"81206ca2c8d07d8e02165dba3d0f4f8b\") " pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:40.259079 kubelet[2549]: I1108 00:36:40.258713 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81206ca2c8d07d8e02165dba3d0f4f8b-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-57-24\" (UID: \"81206ca2c8d07d8e02165dba3d0f4f8b\") " pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:40.259079 kubelet[2549]: I1108 00:36:40.258730 2549 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-flexvolume-dir\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:40.259079 kubelet[2549]: I1108 00:36:40.258746 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21bb66ba7805430d2047a151e222cc4-k8s-certs\") pod \"kube-controller-manager-172-239-57-24\" (UID: \"b21bb66ba7805430d2047a151e222cc4\") " pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:40.259079 kubelet[2549]: I1108 00:36:40.258778 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81206ca2c8d07d8e02165dba3d0f4f8b-k8s-certs\") pod \"kube-apiserver-172-239-57-24\" (UID: \"81206ca2c8d07d8e02165dba3d0f4f8b\") " pod="kube-system/kube-apiserver-172-239-57-24" Nov 8 00:36:40.380457 kubelet[2549]: E1108 00:36:40.379964 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:40.380457 kubelet[2549]: E1108 00:36:40.380235 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:40.381798 kubelet[2549]: E1108 00:36:40.381757 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:40.940352 kubelet[2549]: I1108 00:36:40.940011 2549 apiserver.go:52] "Watching apiserver" Nov 8 00:36:40.957788 kubelet[2549]: I1108 00:36:40.957751 2549 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:36:40.992172 kubelet[2549]: E1108 00:36:40.992014 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:40.992842 kubelet[2549]: I1108 00:36:40.992815 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:40.993798 kubelet[2549]: I1108 00:36:40.993338 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-24" Nov 8 00:36:41.004114 kubelet[2549]: E1108 00:36:41.004092 2549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-57-24\" already exists" pod="kube-system/kube-controller-manager-172-239-57-24" Nov 8 00:36:41.004591 kubelet[2549]: E1108 00:36:41.004573 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:41.007867 kubelet[2549]: E1108 00:36:41.007797 2549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-57-24\" already exists" pod="kube-system/kube-scheduler-172-239-57-24" Nov 8 00:36:41.007999 kubelet[2549]: E1108 00:36:41.007906 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:41.029295 kubelet[2549]: I1108 00:36:41.029213 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-57-24" podStartSLOduration=1.029198337 podStartE2EDuration="1.029198337s" podCreationTimestamp="2025-11-08 00:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:36:41.021810626 +0000 UTC m=+1.144086677" watchObservedRunningTime="2025-11-08 00:36:41.029198337 +0000 UTC m=+1.151474388" Nov 8 00:36:41.029641 kubelet[2549]: I1108 00:36:41.029363 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-57-24" podStartSLOduration=1.029358587 podStartE2EDuration="1.029358587s" podCreationTimestamp="2025-11-08 00:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:36:41.028916956 +0000 UTC m=+1.151193007" watchObservedRunningTime="2025-11-08 00:36:41.029358587 +0000 UTC m=+1.151634638" Nov 8 00:36:41.994144 kubelet[2549]: E1108 00:36:41.993749 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:41.994144 kubelet[2549]: E1108 00:36:41.993837 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:41.994144 kubelet[2549]: E1108 00:36:41.994142 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:42.995003 kubelet[2549]: E1108 00:36:42.994953 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:46.085738 kubelet[2549]: I1108 00:36:46.085692 2549 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:36:46.087101 kubelet[2549]: I1108 00:36:46.086261 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:36:46.087152 containerd[1471]: time="2025-11-08T00:36:46.086063512Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:36:46.905467 kubelet[2549]: I1108 00:36:46.905293 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-57-24" podStartSLOduration=6.9052748600000005 podStartE2EDuration="6.90527486s" podCreationTimestamp="2025-11-08 00:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:36:41.03794623 +0000 UTC m=+1.160222281" watchObservedRunningTime="2025-11-08 00:36:46.90527486 +0000 UTC m=+7.027550911" Nov 8 00:36:46.921092 systemd[1]: Created slice kubepods-besteffort-podbe6162df_2f0c_469f_928e_df7d5bddf3d3.slice - libcontainer container kubepods-besteffort-podbe6162df_2f0c_469f_928e_df7d5bddf3d3.slice. 
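
The recurring dns.go:153 errors are the kubelet noticing that the node's resolv.conf carries more nameserver entries than the limit of three it can hand through to pods; it keeps the first three and logs them as the "applied nameserver line". Reconstructed, the node file would look like this — the first three entries are from the log, the fourth stands in for whichever extra entry trips the warning:

    nameserver 172.232.0.22
    nameserver 172.232.0.9
    nameserver 172.232.0.19
    nameserver ...        # hypothetical extra entry; it is dropped for pods, and only its omission is logged
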
Nov 8 00:36:47.000434 kubelet[2549]: I1108 00:36:47.000393 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be6162df-2f0c-469f-928e-df7d5bddf3d3-lib-modules\") pod \"kube-proxy-lw2zk\" (UID: \"be6162df-2f0c-469f-928e-df7d5bddf3d3\") " pod="kube-system/kube-proxy-lw2zk" Nov 8 00:36:47.000434 kubelet[2549]: I1108 00:36:47.000429 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2282\" (UniqueName: \"kubernetes.io/projected/be6162df-2f0c-469f-928e-df7d5bddf3d3-kube-api-access-f2282\") pod \"kube-proxy-lw2zk\" (UID: \"be6162df-2f0c-469f-928e-df7d5bddf3d3\") " pod="kube-system/kube-proxy-lw2zk" Nov 8 00:36:47.000612 kubelet[2549]: I1108 00:36:47.000453 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be6162df-2f0c-469f-928e-df7d5bddf3d3-kube-proxy\") pod \"kube-proxy-lw2zk\" (UID: \"be6162df-2f0c-469f-928e-df7d5bddf3d3\") " pod="kube-system/kube-proxy-lw2zk" Nov 8 00:36:47.000612 kubelet[2549]: I1108 00:36:47.000468 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be6162df-2f0c-469f-928e-df7d5bddf3d3-xtables-lock\") pod \"kube-proxy-lw2zk\" (UID: \"be6162df-2f0c-469f-928e-df7d5bddf3d3\") " pod="kube-system/kube-proxy-lw2zk" Nov 8 00:36:47.235356 kubelet[2549]: E1108 00:36:47.235226 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:47.237484 containerd[1471]: time="2025-11-08T00:36:47.236656407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lw2zk,Uid:be6162df-2f0c-469f-928e-df7d5bddf3d3,Namespace:kube-system,Attempt:0,}" Nov 8 00:36:47.280122 containerd[1471]: time="2025-11-08T00:36:47.280028112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:47.280499 containerd[1471]: time="2025-11-08T00:36:47.280310013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:47.280499 containerd[1471]: time="2025-11-08T00:36:47.280374733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:47.280900 containerd[1471]: time="2025-11-08T00:36:47.280722594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:47.323713 systemd[1]: Started cri-containerd-390befc47ad65300724c9b805912c1b47f68b5b4a1e125c64081610fcc30e54c.scope - libcontainer container 390befc47ad65300724c9b805912c1b47f68b5b4a1e125c64081610fcc30e54c. Nov 8 00:36:47.351056 systemd[1]: Created slice kubepods-besteffort-pod12f74cf9_cce6_48df_99a1_c54b794ef313.slice - libcontainer container kubepods-besteffort-pod12f74cf9_cce6_48df_99a1_c54b794ef313.slice. 
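
The kubepods-besteffort-pod... unit names in the "Created slice" entries are mechanical: with the systemd cgroup driver on cgroup v2 (CgroupDriver "systemd", CgroupVersion 2 in the NodeConfig dump), the kubelet nests one slice per pod under its QoS class, replacing the dashes of the pod UID with underscores. For the kube-proxy pod above that should land at roughly:

    /sys/fs/cgroup/kubepods.slice/
      kubepods-besteffort.slice/
        kubepods-besteffort-podbe6162df_2f0c_469f_928e_df7d5bddf3d3.slice/   # pod UID be6162df-2f0c-469f-928e-df7d5bddf3d3
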
Nov 8 00:36:47.403091 kubelet[2549]: I1108 00:36:47.402820 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12f74cf9-cce6-48df-99a1-c54b794ef313-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lq5gm\" (UID: \"12f74cf9-cce6-48df-99a1-c54b794ef313\") " pod="tigera-operator/tigera-operator-7dcd859c48-lq5gm" Nov 8 00:36:47.403091 kubelet[2549]: I1108 00:36:47.402860 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvlq\" (UniqueName: \"kubernetes.io/projected/12f74cf9-cce6-48df-99a1-c54b794ef313-kube-api-access-ttvlq\") pod \"tigera-operator-7dcd859c48-lq5gm\" (UID: \"12f74cf9-cce6-48df-99a1-c54b794ef313\") " pod="tigera-operator/tigera-operator-7dcd859c48-lq5gm" Nov 8 00:36:47.412651 containerd[1471]: time="2025-11-08T00:36:47.412613101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lw2zk,Uid:be6162df-2f0c-469f-928e-df7d5bddf3d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"390befc47ad65300724c9b805912c1b47f68b5b4a1e125c64081610fcc30e54c\"" Nov 8 00:36:47.414241 kubelet[2549]: E1108 00:36:47.413650 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:47.418108 containerd[1471]: time="2025-11-08T00:36:47.418068170Z" level=info msg="CreateContainer within sandbox \"390befc47ad65300724c9b805912c1b47f68b5b4a1e125c64081610fcc30e54c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:36:47.433133 containerd[1471]: time="2025-11-08T00:36:47.433108152Z" level=info msg="CreateContainer within sandbox \"390befc47ad65300724c9b805912c1b47f68b5b4a1e125c64081610fcc30e54c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff6c2a8d30a8c8993bdbc91891ebb33115f8b8e2a5c25f642fdd45acaf44bc08\"" Nov 8 00:36:47.433866 containerd[1471]: time="2025-11-08T00:36:47.433833833Z" level=info msg="StartContainer for \"ff6c2a8d30a8c8993bdbc91891ebb33115f8b8e2a5c25f642fdd45acaf44bc08\"" Nov 8 00:36:47.463456 systemd[1]: Started cri-containerd-ff6c2a8d30a8c8993bdbc91891ebb33115f8b8e2a5c25f642fdd45acaf44bc08.scope - libcontainer container ff6c2a8d30a8c8993bdbc91891ebb33115f8b8e2a5c25f642fdd45acaf44bc08. Nov 8 00:36:47.496905 containerd[1471]: time="2025-11-08T00:36:47.496762098Z" level=info msg="StartContainer for \"ff6c2a8d30a8c8993bdbc91891ebb33115f8b8e2a5c25f642fdd45acaf44bc08\" returns successfully" Nov 8 00:36:47.656570 containerd[1471]: time="2025-11-08T00:36:47.656530377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lq5gm,Uid:12f74cf9-cce6-48df-99a1-c54b794ef313,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:36:47.677404 containerd[1471]: time="2025-11-08T00:36:47.677022438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:47.677404 containerd[1471]: time="2025-11-08T00:36:47.677115658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:47.677404 containerd[1471]: time="2025-11-08T00:36:47.677142218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:47.677404 containerd[1471]: time="2025-11-08T00:36:47.677242338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:47.697493 systemd[1]: Started cri-containerd-6ec428651b1ae00c204a41eefd1c06217b603e56db5eed45dc637fc200c3f80f.scope - libcontainer container 6ec428651b1ae00c204a41eefd1c06217b603e56db5eed45dc637fc200c3f80f. Nov 8 00:36:47.737749 containerd[1471]: time="2025-11-08T00:36:47.737648739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lq5gm,Uid:12f74cf9-cce6-48df-99a1-c54b794ef313,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ec428651b1ae00c204a41eefd1c06217b603e56db5eed45dc637fc200c3f80f\"" Nov 8 00:36:47.741883 containerd[1471]: time="2025-11-08T00:36:47.741675225Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:36:48.005524 kubelet[2549]: E1108 00:36:48.005439 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:48.014885 kubelet[2549]: I1108 00:36:48.014754 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lw2zk" podStartSLOduration=2.014739884 podStartE2EDuration="2.014739884s" podCreationTimestamp="2025-11-08 00:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:36:48.014589934 +0000 UTC m=+8.136865985" watchObservedRunningTime="2025-11-08 00:36:48.014739884 +0000 UTC m=+8.137015945" Nov 8 00:36:48.113724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191123070.mount: Deactivated successfully. Nov 8 00:36:48.406506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168020774.mount: Deactivated successfully. 
Nov 8 00:36:49.315943 containerd[1471]: time="2025-11-08T00:36:49.315895896Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:49.316818 containerd[1471]: time="2025-11-08T00:36:49.316596347Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:36:49.317985 containerd[1471]: time="2025-11-08T00:36:49.317352858Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:49.319905 containerd[1471]: time="2025-11-08T00:36:49.319871892Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:36:49.320585 containerd[1471]: time="2025-11-08T00:36:49.320556853Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.578847638s" Nov 8 00:36:49.320659 containerd[1471]: time="2025-11-08T00:36:49.320644503Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:36:49.323896 containerd[1471]: time="2025-11-08T00:36:49.323846658Z" level=info msg="CreateContainer within sandbox \"6ec428651b1ae00c204a41eefd1c06217b603e56db5eed45dc637fc200c3f80f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:36:49.335756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324641053.mount: Deactivated successfully. Nov 8 00:36:49.346664 containerd[1471]: time="2025-11-08T00:36:49.346571552Z" level=info msg="CreateContainer within sandbox \"6ec428651b1ae00c204a41eefd1c06217b603e56db5eed45dc637fc200c3f80f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"312d81c44791a83faa3a37a49b1d2bf92f033a621c809377da4c1910c75519d2\"" Nov 8 00:36:49.347514 containerd[1471]: time="2025-11-08T00:36:49.347048293Z" level=info msg="StartContainer for \"312d81c44791a83faa3a37a49b1d2bf92f033a621c809377da4c1910c75519d2\"" Nov 8 00:36:49.383461 systemd[1]: Started cri-containerd-312d81c44791a83faa3a37a49b1d2bf92f033a621c809377da4c1910c75519d2.scope - libcontainer container 312d81c44791a83faa3a37a49b1d2bf92f033a621c809377da4c1910c75519d2. 
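
Taken together, the containerd entries trace the CRI sequence the kubelet drives for every pod; condensed, with ids abbreviated from the log:

    RunPodSandbox(kube-proxy-lw2zk)                   -> sandbox 390befc47ad6...
    CreateContainer(390befc47ad6..., kube-proxy)      -> ff6c2a8d30a8...    # no PullImage: image already on disk
    StartContainer(ff6c2a8d30a8...)

    RunPodSandbox(tigera-operator-7dcd859c48-lq5gm)   -> sandbox 6ec428651b1a...
    PullImage(quay.io/tigera/operator:v1.38.7)        -> 1.578847638s, 25061691 bytes read
    CreateContainer(6ec428651b1a..., tigera-operator) -> 312d81c44791...
    StartContainer(312d81c44791...)

Each "Started cri-containerd-<id>.scope" unit is systemd's scope for the matching sandbox or container process.
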
Nov 8 00:36:49.419407 containerd[1471]: time="2025-11-08T00:36:49.419254671Z" level=info msg="StartContainer for \"312d81c44791a83faa3a37a49b1d2bf92f033a621c809377da4c1910c75519d2\" returns successfully" Nov 8 00:36:49.441725 kubelet[2549]: E1108 00:36:49.441242 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:50.011218 kubelet[2549]: E1108 00:36:50.010895 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:50.221481 kubelet[2549]: E1108 00:36:50.221182 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:50.236218 kubelet[2549]: I1108 00:36:50.235947 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lq5gm" podStartSLOduration=1.654528414 podStartE2EDuration="3.235931266s" podCreationTimestamp="2025-11-08 00:36:47 +0000 UTC" firstStartedPulling="2025-11-08 00:36:47.739895292 +0000 UTC m=+7.862171353" lastFinishedPulling="2025-11-08 00:36:49.321298144 +0000 UTC m=+9.443574205" observedRunningTime="2025-11-08 00:36:50.031137329 +0000 UTC m=+10.153413380" watchObservedRunningTime="2025-11-08 00:36:50.235931266 +0000 UTC m=+10.358207317" Nov 8 00:36:51.012542 kubelet[2549]: E1108 00:36:51.012493 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:51.044423 kubelet[2549]: E1108 00:36:51.044379 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:52.014084 kubelet[2549]: E1108 00:36:52.014032 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:52.016580 kubelet[2549]: E1108 00:36:52.016545 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:54.256405 update_engine[1450]: I20251108 00:36:54.256351 1450 update_attempter.cc:509] Updating boot flags... Nov 8 00:36:54.299457 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2928) Nov 8 00:36:54.376415 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2930) Nov 8 00:36:55.040776 sudo[1685]: pam_unix(sudo:session): session closed for user root Nov 8 00:36:55.093170 sshd[1682]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:55.099761 systemd[1]: sshd@6-172.239.57.24:22-147.75.109.163:47820.service: Deactivated successfully. Nov 8 00:36:55.102153 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:36:55.102529 systemd[1]: session-7.scope: Consumed 3.872s CPU time, 161.3M memory peak, 0B memory swap peak. Nov 8 00:36:55.108080 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. 
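
The two numbers in each pod_startup_latency_tracker entry differ only by image-pull time: podStartSLOduration is the end-to-end duration with the pull window subtracted (the startup SLI excludes pulls), which is why kube-proxy's two figures are identical — its pull timestamps are the zero value, so nothing was pulled. For tigera-operator the arithmetic closes exactly:

    pull window         = 00:36:49.321298144 - 00:36:47.739895292 = 1.581402852s
    podStartSLOduration = 3.235931266s - 1.581402852s             = 1.654528414s
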
Nov 8 00:36:55.109883 systemd-logind[1449]: Removed session 7. Nov 8 00:36:59.320871 systemd[1]: Created slice kubepods-besteffort-pode323b54b_ff31_466f_b2b9_e050cee8af67.slice - libcontainer container kubepods-besteffort-pode323b54b_ff31_466f_b2b9_e050cee8af67.slice. Nov 8 00:36:59.375920 kubelet[2549]: I1108 00:36:59.375824 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e323b54b-ff31-466f-b2b9-e050cee8af67-tigera-ca-bundle\") pod \"calico-typha-67ff6d8d64-knrhb\" (UID: \"e323b54b-ff31-466f-b2b9-e050cee8af67\") " pod="calico-system/calico-typha-67ff6d8d64-knrhb" Nov 8 00:36:59.375920 kubelet[2549]: I1108 00:36:59.375893 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmcv5\" (UniqueName: \"kubernetes.io/projected/e323b54b-ff31-466f-b2b9-e050cee8af67-kube-api-access-zmcv5\") pod \"calico-typha-67ff6d8d64-knrhb\" (UID: \"e323b54b-ff31-466f-b2b9-e050cee8af67\") " pod="calico-system/calico-typha-67ff6d8d64-knrhb" Nov 8 00:36:59.375920 kubelet[2549]: I1108 00:36:59.375914 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e323b54b-ff31-466f-b2b9-e050cee8af67-typha-certs\") pod \"calico-typha-67ff6d8d64-knrhb\" (UID: \"e323b54b-ff31-466f-b2b9-e050cee8af67\") " pod="calico-system/calico-typha-67ff6d8d64-knrhb" Nov 8 00:36:59.495854 systemd[1]: Created slice kubepods-besteffort-pod3c399647_9062_4592_8ca1_37dfae67b8f4.slice - libcontainer container kubepods-besteffort-pod3c399647_9062_4592_8ca1_37dfae67b8f4.slice. Nov 8 00:36:59.579016 kubelet[2549]: I1108 00:36:59.578586 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-lib-modules\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579016 kubelet[2549]: I1108 00:36:59.578634 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-var-run-calico\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579016 kubelet[2549]: I1108 00:36:59.578655 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-cni-net-dir\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579016 kubelet[2549]: I1108 00:36:59.578671 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-flexvol-driver-host\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579016 kubelet[2549]: I1108 00:36:59.578688 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c399647-9062-4592-8ca1-37dfae67b8f4-tigera-ca-bundle\") pod 
\"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579300 kubelet[2549]: I1108 00:36:59.578706 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-var-lib-calico\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579300 kubelet[2549]: I1108 00:36:59.578724 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-xtables-lock\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579300 kubelet[2549]: I1108 00:36:59.578742 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-cni-log-dir\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579300 kubelet[2549]: I1108 00:36:59.578758 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qllzt\" (UniqueName: \"kubernetes.io/projected/3c399647-9062-4592-8ca1-37dfae67b8f4-kube-api-access-qllzt\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579300 kubelet[2549]: I1108 00:36:59.578775 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-policysync\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579449 kubelet[2549]: I1108 00:36:59.578794 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c399647-9062-4592-8ca1-37dfae67b8f4-cni-bin-dir\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.579449 kubelet[2549]: I1108 00:36:59.578810 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c399647-9062-4592-8ca1-37dfae67b8f4-node-certs\") pod \"calico-node-q28fg\" (UID: \"3c399647-9062-4592-8ca1-37dfae67b8f4\") " pod="calico-system/calico-node-q28fg" Nov 8 00:36:59.626363 kubelet[2549]: E1108 00:36:59.625576 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:59.626468 containerd[1471]: time="2025-11-08T00:36:59.626127603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67ff6d8d64-knrhb,Uid:e323b54b-ff31-466f-b2b9-e050cee8af67,Namespace:calico-system,Attempt:0,}" Nov 8 00:36:59.655220 containerd[1471]: time="2025-11-08T00:36:59.654116045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:36:59.655220 containerd[1471]: time="2025-11-08T00:36:59.654178335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:36:59.655220 containerd[1471]: time="2025-11-08T00:36:59.654191665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:59.655220 containerd[1471]: time="2025-11-08T00:36:59.654264355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:36:59.687514 kubelet[2549]: E1108 00:36:59.685888 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.687514 kubelet[2549]: W1108 00:36:59.685912 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.687514 kubelet[2549]: E1108 00:36:59.685957 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.687514 kubelet[2549]: E1108 00:36:59.686590 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.687514 kubelet[2549]: W1108 00:36:59.686601 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.687514 kubelet[2549]: E1108 00:36:59.686611 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.687514 kubelet[2549]: E1108 00:36:59.687188 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.687514 kubelet[2549]: W1108 00:36:59.687197 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.687514 kubelet[2549]: E1108 00:36:59.687206 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.689082 kubelet[2549]: E1108 00:36:59.688289 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.689082 kubelet[2549]: W1108 00:36:59.688301 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.689082 kubelet[2549]: E1108 00:36:59.688433 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:36:59.692681 kubelet[2549]: E1108 00:36:59.691073 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.692681 kubelet[2549]: W1108 00:36:59.691086 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.692681 kubelet[2549]: E1108 00:36:59.691098 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.696367 kubelet[2549]: E1108 00:36:59.695387 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.696367 kubelet[2549]: W1108 00:36:59.695445 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.696367 kubelet[2549]: E1108 00:36:59.695459 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.700478 systemd[1]: Started cri-containerd-50dee120f8025003d152ebb8df549681547526389c29f8055e9435d639137992.scope - libcontainer container 50dee120f8025003d152ebb8df549681547526389c29f8055e9435d639137992. Nov 8 00:36:59.707900 kubelet[2549]: E1108 00:36:59.707658 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.707900 kubelet[2549]: W1108 00:36:59.707676 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.707900 kubelet[2549]: E1108 00:36:59.707691 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:36:59.712243 kubelet[2549]: E1108 00:36:59.712188 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:36:59.768891 containerd[1471]: time="2025-11-08T00:36:59.768849360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67ff6d8d64-knrhb,Uid:e323b54b-ff31-466f-b2b9-e050cee8af67,Namespace:calico-system,Attempt:0,} returns sandbox id \"50dee120f8025003d152ebb8df549681547526389c29f8055e9435d639137992\"" Nov 8 00:36:59.770053 kubelet[2549]: E1108 00:36:59.769773 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:36:59.772227 containerd[1471]: time="2025-11-08T00:36:59.772011342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:36:59.773355 kubelet[2549]: E1108 00:36:59.773307 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.773479 kubelet[2549]: W1108 00:36:59.773465 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.773592 kubelet[2549]: E1108 00:36:59.773531 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.774099 kubelet[2549]: E1108 00:36:59.773955 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.774099 kubelet[2549]: W1108 00:36:59.773968 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.774099 kubelet[2549]: E1108 00:36:59.773979 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.775133 kubelet[2549]: E1108 00:36:59.774619 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.775133 kubelet[2549]: W1108 00:36:59.774631 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.775133 kubelet[2549]: E1108 00:36:59.774641 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:36:59.775542 kubelet[2549]: E1108 00:36:59.775430 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.775542 kubelet[2549]: W1108 00:36:59.775441 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.775542 kubelet[2549]: E1108 00:36:59.775453 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.775784 kubelet[2549]: E1108 00:36:59.775689 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.775784 kubelet[2549]: W1108 00:36:59.775700 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.775784 kubelet[2549]: E1108 00:36:59.775708 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.776112 kubelet[2549]: E1108 00:36:59.776013 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.776112 kubelet[2549]: W1108 00:36:59.776024 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.776112 kubelet[2549]: E1108 00:36:59.776032 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.776458 kubelet[2549]: E1108 00:36:59.776364 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.776458 kubelet[2549]: W1108 00:36:59.776375 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.776458 kubelet[2549]: E1108 00:36:59.776383 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.776678 kubelet[2549]: E1108 00:36:59.776666 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.776813 kubelet[2549]: W1108 00:36:59.776721 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.776813 kubelet[2549]: E1108 00:36:59.776733 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:36:59.777000 kubelet[2549]: E1108 00:36:59.776989 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.777121 kubelet[2549]: W1108 00:36:59.777037 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.777121 kubelet[2549]: E1108 00:36:59.777048 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.777345 kubelet[2549]: E1108 00:36:59.777250 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.777345 kubelet[2549]: W1108 00:36:59.777261 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.777345 kubelet[2549]: E1108 00:36:59.777268 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.777589 kubelet[2549]: E1108 00:36:59.777578 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.777717 kubelet[2549]: W1108 00:36:59.777628 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.777717 kubelet[2549]: E1108 00:36:59.777639 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.777991 kubelet[2549]: E1108 00:36:59.777894 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.777991 kubelet[2549]: W1108 00:36:59.777903 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.777991 kubelet[2549]: E1108 00:36:59.777912 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:36:59.778188 kubelet[2549]: E1108 00:36:59.778177 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.778234 kubelet[2549]: W1108 00:36:59.778224 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.778383 kubelet[2549]: E1108 00:36:59.778270 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:36:59.778624 kubelet[2549]: E1108 00:36:59.778529 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.778624 kubelet[2549]: W1108 00:36:59.778539 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.778624 kubelet[2549]: E1108 00:36:59.778547 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:36:59.781591 kubelet[2549]: I1108 00:36:59.781536 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5ed425e-ae3a-4fee-9b79-13f79eee03b3-kubelet-dir\") pod \"csi-node-driver-w66pk\" (UID: \"e5ed425e-ae3a-4fee-9b79-13f79eee03b3\") " pod="calico-system/csi-node-driver-w66pk"
Nov 8 00:36:59.781919 kubelet[2549]: I1108 00:36:59.781839 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcg49\" (UniqueName: \"kubernetes.io/projected/e5ed425e-ae3a-4fee-9b79-13f79eee03b3-kube-api-access-hcg49\") pod \"csi-node-driver-w66pk\" (UID: \"e5ed425e-ae3a-4fee-9b79-13f79eee03b3\") " pod="calico-system/csi-node-driver-w66pk"
Nov 8 00:36:59.782248 kubelet[2549]: I1108 00:36:59.782216 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e5ed425e-ae3a-4fee-9b79-13f79eee03b3-registration-dir\") pod \"csi-node-driver-w66pk\" (UID: \"e5ed425e-ae3a-4fee-9b79-13f79eee03b3\") " pod="calico-system/csi-node-driver-w66pk"
Nov 8 00:36:59.782575 kubelet[2549]: I1108 00:36:59.782541 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e5ed425e-ae3a-4fee-9b79-13f79eee03b3-varrun\") pod \"csi-node-driver-w66pk\" (UID: \"e5ed425e-ae3a-4fee-9b79-13f79eee03b3\") " pod="calico-system/csi-node-driver-w66pk"
Nov 8 00:36:59.782914 kubelet[2549]: I1108 00:36:59.782855 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e5ed425e-ae3a-4fee-9b79-13f79eee03b3-socket-dir\") pod \"csi-node-driver-w66pk\" (UID: \"e5ed425e-ae3a-4fee-9b79-13f79eee03b3\") " pod="calico-system/csi-node-driver-w66pk"
Nov 8 00:36:59.805961 kubelet[2549]: E1108 00:36:59.805922 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 8 00:36:59.806511 containerd[1471]: time="2025-11-08T00:36:59.806470359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q28fg,Uid:3c399647-9062-4592-8ca1-37dfae67b8f4,Namespace:calico-system,Attempt:0,}"
Nov 8 00:36:59.839602 containerd[1471]: time="2025-11-08T00:36:59.837301425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:36:59.839602 containerd[1471]: time="2025-11-08T00:36:59.837379505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:36:59.839602 containerd[1471]: time="2025-11-08T00:36:59.837403644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:36:59.839602 containerd[1471]: time="2025-11-08T00:36:59.837488874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:36:59.874458 systemd[1]: Started cri-containerd-acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd.scope - libcontainer container acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd.
Nov 8 00:36:59.906339 kubelet[2549]: E1108 00:36:59.905421 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:36:59.906339 kubelet[2549]: W1108 00:36:59.905440 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:36:59.906339 kubelet[2549]: E1108 00:36:59.905478 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 8 00:36:59.910682 containerd[1471]: time="2025-11-08T00:36:59.910651333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q28fg,Uid:3c399647-9062-4592-8ca1-37dfae67b8f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd\"" Nov 8 00:36:59.911698 kubelet[2549]: E1108 00:36:59.911673 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:00.561927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341273757.mount: Deactivated successfully. Nov 8 00:37:00.971636 kubelet[2549]: E1108 00:37:00.970527 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:01.039529 containerd[1471]: time="2025-11-08T00:37:01.039480722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:01.040387 containerd[1471]: time="2025-11-08T00:37:01.040199758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:37:01.041943 containerd[1471]: time="2025-11-08T00:37:01.040862975Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:01.043261 containerd[1471]: time="2025-11-08T00:37:01.042502688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:01.043261 containerd[1471]: time="2025-11-08T00:37:01.043152215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.271113253s" Nov 8 00:37:01.043261 containerd[1471]: time="2025-11-08T00:37:01.043182794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:37:01.045132 containerd[1471]: time="2025-11-08T00:37:01.045099975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:37:01.062686 containerd[1471]: time="2025-11-08T00:37:01.062405203Z" level=info msg="CreateContainer within sandbox \"50dee120f8025003d152ebb8df549681547526389c29f8055e9435d639137992\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:37:01.070175 containerd[1471]: time="2025-11-08T00:37:01.070143036Z" level=info msg="CreateContainer within sandbox \"50dee120f8025003d152ebb8df549681547526389c29f8055e9435d639137992\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"527e9376065df242683f1ca54f72e6e9a615b04f3ef71703b36bbc447c3d990f\"" Nov 8 00:37:01.070631 containerd[1471]: 
time="2025-11-08T00:37:01.070600734Z" level=info msg="StartContainer for \"527e9376065df242683f1ca54f72e6e9a615b04f3ef71703b36bbc447c3d990f\"" Nov 8 00:37:01.105434 systemd[1]: Started cri-containerd-527e9376065df242683f1ca54f72e6e9a615b04f3ef71703b36bbc447c3d990f.scope - libcontainer container 527e9376065df242683f1ca54f72e6e9a615b04f3ef71703b36bbc447c3d990f. Nov 8 00:37:01.152508 containerd[1471]: time="2025-11-08T00:37:01.152393135Z" level=info msg="StartContainer for \"527e9376065df242683f1ca54f72e6e9a615b04f3ef71703b36bbc447c3d990f\" returns successfully" Nov 8 00:37:01.692230 containerd[1471]: time="2025-11-08T00:37:01.692169864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:01.692965 containerd[1471]: time="2025-11-08T00:37:01.692927500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:37:01.693636 containerd[1471]: time="2025-11-08T00:37:01.693556357Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:01.695240 containerd[1471]: time="2025-11-08T00:37:01.695185340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:01.695895 containerd[1471]: time="2025-11-08T00:37:01.695855607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 650.722932ms" Nov 8 00:37:01.695939 containerd[1471]: time="2025-11-08T00:37:01.695894306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:37:01.700668 containerd[1471]: time="2025-11-08T00:37:01.700526604Z" level=info msg="CreateContainer within sandbox \"acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:37:01.713632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457719751.mount: Deactivated successfully. Nov 8 00:37:01.716254 containerd[1471]: time="2025-11-08T00:37:01.716227389Z" level=info msg="CreateContainer within sandbox \"acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1\"" Nov 8 00:37:01.716915 containerd[1471]: time="2025-11-08T00:37:01.716875176Z" level=info msg="StartContainer for \"e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1\"" Nov 8 00:37:01.750495 systemd[1]: Started cri-containerd-e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1.scope - libcontainer container e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1. 
Nov 8 00:37:01.781612 containerd[1471]: time="2025-11-08T00:37:01.781535498Z" level=info msg="StartContainer for \"e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1\" returns successfully" Nov 8 00:37:01.796655 systemd[1]: cri-containerd-e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1.scope: Deactivated successfully. Nov 8 00:37:01.887027 containerd[1471]: time="2025-11-08T00:37:01.886388659Z" level=info msg="shim disconnected" id=e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1 namespace=k8s.io Nov 8 00:37:01.887027 containerd[1471]: time="2025-11-08T00:37:01.886949526Z" level=warning msg="cleaning up after shim disconnected" id=e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1 namespace=k8s.io Nov 8 00:37:01.887027 containerd[1471]: time="2025-11-08T00:37:01.887026906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:37:02.037054 kubelet[2549]: E1108 00:37:02.036922 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:02.038572 containerd[1471]: time="2025-11-08T00:37:02.038500610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:37:02.040534 kubelet[2549]: E1108 00:37:02.040188 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:02.072924 kubelet[2549]: I1108 00:37:02.072827 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67ff6d8d64-knrhb" podStartSLOduration=1.800005136 podStartE2EDuration="3.072813759s" podCreationTimestamp="2025-11-08 00:36:59 +0000 UTC" firstStartedPulling="2025-11-08 00:36:59.771388496 +0000 UTC m=+19.893664547" lastFinishedPulling="2025-11-08 00:37:01.044197119 +0000 UTC m=+21.166473170" observedRunningTime="2025-11-08 00:37:02.07264882 +0000 UTC m=+22.194924871" watchObservedRunningTime="2025-11-08 00:37:02.072813759 +0000 UTC m=+22.195089810" Nov 8 00:37:02.485124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e98d1f2f8d939ee233ee5c64f654e1554c5b80c80903c88daad6ea887f1f71b1-rootfs.mount: Deactivated successfully. 
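The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check against the logged values for calico-typha-67ff6d8d64-knrhb, using seconds past 00:36:00:

# Recomputing the calico-typha startup numbers logged above.
created  = 59.0            # podCreationTimestamp     00:36:59
running  = 62.072813759    # watchObservedRunningTime 00:37:02.072813759
pull_beg = 59.771388496    # firstStartedPulling      00:36:59.771388496
pull_end = 61.044197119    # lastFinishedPulling      00:37:01.044197119

e2e = running - created               # 3.072813759 -> podStartE2EDuration
slo = e2e - (pull_end - pull_beg)     # 1.800005136 -> podStartSLOduration
print(f"{e2e=:.9f} {slo=:.9f}")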
Nov 8 00:37:02.971552 kubelet[2549]: E1108 00:37:02.970907 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:03.040640 kubelet[2549]: I1108 00:37:03.040612 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:37:03.041670 kubelet[2549]: E1108 00:37:03.040940 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:03.849147 containerd[1471]: time="2025-11-08T00:37:03.849103545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:03.851271 containerd[1471]: time="2025-11-08T00:37:03.850238601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:37:03.851478 containerd[1471]: time="2025-11-08T00:37:03.851454846Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:03.854550 containerd[1471]: time="2025-11-08T00:37:03.854521603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:03.855522 containerd[1471]: time="2025-11-08T00:37:03.855399641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.816866342s" Nov 8 00:37:03.855522 containerd[1471]: time="2025-11-08T00:37:03.855443750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:37:03.865408 containerd[1471]: time="2025-11-08T00:37:03.865292151Z" level=info msg="CreateContainer within sandbox \"acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:37:03.883340 containerd[1471]: time="2025-11-08T00:37:03.882702501Z" level=info msg="CreateContainer within sandbox \"acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b\"" Nov 8 00:37:03.884628 containerd[1471]: time="2025-11-08T00:37:03.884608824Z" level=info msg="StartContainer for \"eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b\"" Nov 8 00:37:03.927891 systemd[1]: Started cri-containerd-eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b.scope - libcontainer container eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b. 
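The "network is not ready ... cni plugin not initialized" record above, and the sandbox-creation failures further down, reflect one ordering dependency: no pod sandbox can get networking until the install-cni init container (started just above) has placed a CNI config on the node and calico-node has written /var/lib/calico/nodename. Until then, the calico plugin's add/delete calls fail with the stat error quoted later in this log, and kubelet simply retries. A small sketch of that precondition check, with the path and error wording taken from the log messages (this is not the plugin's actual code):

# Sketch of the precondition the calico CNI plugin enforces in the errors
# below: it stats /var/lib/calico/nodename (written by a running calico-node)
# before handling any add/delete call, and refuses to operate until it exists.
import os
import sys

NODENAME = "/var/lib/calico/nodename"  # path taken from the log messages

def require_nodename() -> str:
    if not os.path.exists(NODENAME):
        sys.exit(f"stat {NODENAME}: no such file or directory: "
                 "check that the calico/node container is running "
                 "and has mounted /var/lib/calico/")
    with open(NODENAME) as f:
        return f.read().strip()

if __name__ == "__main__":
    print(require_nodename())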
Nov 8 00:37:03.961749 containerd[1471]: time="2025-11-08T00:37:03.961716075Z" level=info msg="StartContainer for \"eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b\" returns successfully" Nov 8 00:37:04.047748 kubelet[2549]: E1108 00:37:04.047714 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:04.488693 systemd[1]: cri-containerd-eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b.scope: Deactivated successfully. Nov 8 00:37:04.512738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b-rootfs.mount: Deactivated successfully. Nov 8 00:37:04.555092 containerd[1471]: time="2025-11-08T00:37:04.555026999Z" level=info msg="shim disconnected" id=eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b namespace=k8s.io Nov 8 00:37:04.555092 containerd[1471]: time="2025-11-08T00:37:04.555073219Z" level=warning msg="cleaning up after shim disconnected" id=eb44dbf2daccdfa955aa797b847a2d93a53a9d1c4ca5570235fe5cfe391a5a9b namespace=k8s.io Nov 8 00:37:04.555092 containerd[1471]: time="2025-11-08T00:37:04.555083289Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:37:04.590928 kubelet[2549]: I1108 00:37:04.590290 2549 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:37:04.631163 systemd[1]: Created slice kubepods-besteffort-pod0470eb38_6ef8_454a_af84_3dd964fdf6b9.slice - libcontainer container kubepods-besteffort-pod0470eb38_6ef8_454a_af84_3dd964fdf6b9.slice. Nov 8 00:37:04.643244 systemd[1]: Created slice kubepods-burstable-pod8f5c159c_1100_4d8d_b4a2_0811154f10ae.slice - libcontainer container kubepods-burstable-pod8f5c159c_1100_4d8d_b4a2_0811154f10ae.slice. Nov 8 00:37:04.653050 systemd[1]: Created slice kubepods-besteffort-pod5e48afab_b056_4d85_9cc7_4c4bf819b790.slice - libcontainer container kubepods-besteffort-pod5e48afab_b056_4d85_9cc7_4c4bf819b790.slice. Nov 8 00:37:04.661244 systemd[1]: Created slice kubepods-besteffort-podb339edb0_297f_4caa_90a2_1e5e9c9f0583.slice - libcontainer container kubepods-besteffort-podb339edb0_297f_4caa_90a2_1e5e9c9f0583.slice. Nov 8 00:37:04.668599 systemd[1]: Created slice kubepods-besteffort-pod9f16a8b2_c22c_42c4_a0b9_731351a537c7.slice - libcontainer container kubepods-besteffort-pod9f16a8b2_c22c_42c4_a0b9_731351a537c7.slice. Nov 8 00:37:04.678489 systemd[1]: Created slice kubepods-besteffort-podb7ec371f_050b_4208_a3ac_8f708d9ed8b9.slice - libcontainer container kubepods-besteffort-podb7ec371f_050b_4208_a3ac_8f708d9ed8b9.slice. Nov 8 00:37:04.687116 systemd[1]: Created slice kubepods-burstable-pod50e2147d_0531_46fa_b3e7_3b3b05f008fd.slice - libcontainer container kubepods-burstable-pod50e2147d_0531_46fa_b3e7_3b3b05f008fd.slice. 
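The recurring dns.go:153 "Nameserver limits exceeded" records throughout this window are kubelet's warning that the node's resolv.conf lists more nameservers than it will propagate to pods: the limit is three (matching the glibc resolver), so kubelet keeps the first three, here 172.232.0.22, 172.232.0.9 and 172.232.0.19, and drops the rest. A sketch of the same clamp; the fourth server in the example is a stand-in, since the omitted entries never appear in the log:

# Sketch of the clamp behind the dns.go:153 warnings above: kubelet applies at
# most three nameservers from the node resolv.conf and logs the line it keeps.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.startswith("nameserver") and len(line.split()) >= 2]
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been "
              "omitted, the applied nameserver line is: "
              + " ".join(servers[:MAX_NAMESERVERS]))
    return servers[:MAX_NAMESERVERS]

# Example mirroring the logged line (192.0.2.1 is a hypothetical extra entry):
print(applied_nameservers(
    "nameserver 172.232.0.22\nnameserver 172.232.0.9\n"
    "nameserver 172.232.0.19\nnameserver 192.0.2.1\n"))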
Nov 8 00:37:04.724543 kubelet[2549]: I1108 00:37:04.724520 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e48afab-b056-4d85-9cc7-4c4bf819b790-calico-apiserver-certs\") pod \"calico-apiserver-5cc866c96c-r4b4p\" (UID: \"5e48afab-b056-4d85-9cc7-4c4bf819b790\") " pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" Nov 8 00:37:04.724956 kubelet[2549]: I1108 00:37:04.724905 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f5c159c-1100-4d8d-b4a2-0811154f10ae-config-volume\") pod \"coredns-674b8bbfcf-gsg4q\" (UID: \"8f5c159c-1100-4d8d-b4a2-0811154f10ae\") " pod="kube-system/coredns-674b8bbfcf-gsg4q" Nov 8 00:37:04.724956 kubelet[2549]: I1108 00:37:04.724951 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-ca-bundle\") pod \"whisker-7579885fb4-6qtb5\" (UID: \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\") " pod="calico-system/whisker-7579885fb4-6qtb5" Nov 8 00:37:04.725048 kubelet[2549]: I1108 00:37:04.724974 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8gt\" (UniqueName: \"kubernetes.io/projected/0470eb38-6ef8-454a-af84-3dd964fdf6b9-kube-api-access-hk8gt\") pod \"whisker-7579885fb4-6qtb5\" (UID: \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\") " pod="calico-system/whisker-7579885fb4-6qtb5" Nov 8 00:37:04.725048 kubelet[2549]: I1108 00:37:04.724994 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmbp\" (UniqueName: \"kubernetes.io/projected/8f5c159c-1100-4d8d-b4a2-0811154f10ae-kube-api-access-msmbp\") pod \"coredns-674b8bbfcf-gsg4q\" (UID: \"8f5c159c-1100-4d8d-b4a2-0811154f10ae\") " pod="kube-system/coredns-674b8bbfcf-gsg4q" Nov 8 00:37:04.725048 kubelet[2549]: I1108 00:37:04.725013 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-backend-key-pair\") pod \"whisker-7579885fb4-6qtb5\" (UID: \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\") " pod="calico-system/whisker-7579885fb4-6qtb5" Nov 8 00:37:04.725048 kubelet[2549]: I1108 00:37:04.725032 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jf2p\" (UniqueName: \"kubernetes.io/projected/5e48afab-b056-4d85-9cc7-4c4bf819b790-kube-api-access-2jf2p\") pod \"calico-apiserver-5cc866c96c-r4b4p\" (UID: \"5e48afab-b056-4d85-9cc7-4c4bf819b790\") " pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" Nov 8 00:37:04.826230 kubelet[2549]: I1108 00:37:04.826103 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pph6q\" (UniqueName: \"kubernetes.io/projected/9f16a8b2-c22c-42c4-a0b9-731351a537c7-kube-api-access-pph6q\") pod \"calico-kube-controllers-769c89c5c9-znhjq\" (UID: \"9f16a8b2-c22c-42c4-a0b9-731351a537c7\") " pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" Nov 8 00:37:04.826230 kubelet[2549]: I1108 00:37:04.826197 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/50e2147d-0531-46fa-b3e7-3b3b05f008fd-config-volume\") pod \"coredns-674b8bbfcf-fhv8n\" (UID: \"50e2147d-0531-46fa-b3e7-3b3b05f008fd\") " pod="kube-system/coredns-674b8bbfcf-fhv8n" Nov 8 00:37:04.826230 kubelet[2549]: I1108 00:37:04.826216 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btn6d\" (UniqueName: \"kubernetes.io/projected/50e2147d-0531-46fa-b3e7-3b3b05f008fd-kube-api-access-btn6d\") pod \"coredns-674b8bbfcf-fhv8n\" (UID: \"50e2147d-0531-46fa-b3e7-3b3b05f008fd\") " pod="kube-system/coredns-674b8bbfcf-fhv8n" Nov 8 00:37:04.826346 kubelet[2549]: I1108 00:37:04.826237 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdpp\" (UniqueName: \"kubernetes.io/projected/b339edb0-297f-4caa-90a2-1e5e9c9f0583-kube-api-access-kcdpp\") pod \"goldmane-666569f655-x5vbk\" (UID: \"b339edb0-297f-4caa-90a2-1e5e9c9f0583\") " pod="calico-system/goldmane-666569f655-x5vbk" Nov 8 00:37:04.826346 kubelet[2549]: I1108 00:37:04.826293 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b339edb0-297f-4caa-90a2-1e5e9c9f0583-config\") pod \"goldmane-666569f655-x5vbk\" (UID: \"b339edb0-297f-4caa-90a2-1e5e9c9f0583\") " pod="calico-system/goldmane-666569f655-x5vbk" Nov 8 00:37:04.826346 kubelet[2549]: I1108 00:37:04.826314 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b339edb0-297f-4caa-90a2-1e5e9c9f0583-goldmane-ca-bundle\") pod \"goldmane-666569f655-x5vbk\" (UID: \"b339edb0-297f-4caa-90a2-1e5e9c9f0583\") " pod="calico-system/goldmane-666569f655-x5vbk" Nov 8 00:37:04.826420 kubelet[2549]: I1108 00:37:04.826380 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b339edb0-297f-4caa-90a2-1e5e9c9f0583-goldmane-key-pair\") pod \"goldmane-666569f655-x5vbk\" (UID: \"b339edb0-297f-4caa-90a2-1e5e9c9f0583\") " pod="calico-system/goldmane-666569f655-x5vbk" Nov 8 00:37:04.826447 kubelet[2549]: I1108 00:37:04.826408 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7ec371f-050b-4208-a3ac-8f708d9ed8b9-calico-apiserver-certs\") pod \"calico-apiserver-5cc866c96c-jlfgk\" (UID: \"b7ec371f-050b-4208-a3ac-8f708d9ed8b9\") " pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" Nov 8 00:37:04.826447 kubelet[2549]: I1108 00:37:04.826438 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j488h\" (UniqueName: \"kubernetes.io/projected/b7ec371f-050b-4208-a3ac-8f708d9ed8b9-kube-api-access-j488h\") pod \"calico-apiserver-5cc866c96c-jlfgk\" (UID: \"b7ec371f-050b-4208-a3ac-8f708d9ed8b9\") " pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" Nov 8 00:37:04.826498 kubelet[2549]: I1108 00:37:04.826457 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f16a8b2-c22c-42c4-a0b9-731351a537c7-tigera-ca-bundle\") pod \"calico-kube-controllers-769c89c5c9-znhjq\" (UID: \"9f16a8b2-c22c-42c4-a0b9-731351a537c7\") " pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" Nov 8 
00:37:04.950698 kubelet[2549]: E1108 00:37:04.949848 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:04.958763 containerd[1471]: time="2025-11-08T00:37:04.958073133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7579885fb4-6qtb5,Uid:0470eb38-6ef8-454a-af84-3dd964fdf6b9,Namespace:calico-system,Attempt:0,}" Nov 8 00:37:04.962987 containerd[1471]: time="2025-11-08T00:37:04.962755506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gsg4q,Uid:8f5c159c-1100-4d8d-b4a2-0811154f10ae,Namespace:kube-system,Attempt:0,}" Nov 8 00:37:04.963081 containerd[1471]: time="2025-11-08T00:37:04.963038995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-r4b4p,Uid:5e48afab-b056-4d85-9cc7-4c4bf819b790,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:37:04.988190 containerd[1471]: time="2025-11-08T00:37:04.988107294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-jlfgk,Uid:b7ec371f-050b-4208-a3ac-8f708d9ed8b9,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:37:04.989842 kubelet[2549]: E1108 00:37:04.989611 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:04.992129 containerd[1471]: time="2025-11-08T00:37:04.991933169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhv8n,Uid:50e2147d-0531-46fa-b3e7-3b3b05f008fd,Namespace:kube-system,Attempt:0,}" Nov 8 00:37:04.994126 systemd[1]: Created slice kubepods-besteffort-pode5ed425e_ae3a_4fee_9b79_13f79eee03b3.slice - libcontainer container kubepods-besteffort-pode5ed425e_ae3a_4fee_9b79_13f79eee03b3.slice. 
Nov 8 00:37:05.002978 containerd[1471]: time="2025-11-08T00:37:05.002825550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66pk,Uid:e5ed425e-ae3a-4fee-9b79-13f79eee03b3,Namespace:calico-system,Attempt:0,}" Nov 8 00:37:05.070780 kubelet[2549]: E1108 00:37:05.070508 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:05.073368 containerd[1471]: time="2025-11-08T00:37:05.073146075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:37:05.222028 containerd[1471]: time="2025-11-08T00:37:05.221867189Z" level=error msg="Failed to destroy network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.222361 containerd[1471]: time="2025-11-08T00:37:05.222256238Z" level=error msg="encountered an error cleaning up failed sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.222402 containerd[1471]: time="2025-11-08T00:37:05.222307008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-r4b4p,Uid:5e48afab-b056-4d85-9cc7-4c4bf819b790,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.222692 kubelet[2549]: E1108 00:37:05.222601 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.222692 kubelet[2549]: E1108 00:37:05.222684 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" Nov 8 00:37:05.222760 kubelet[2549]: E1108 00:37:05.222708 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" Nov 8 00:37:05.222786 kubelet[2549]: E1108 00:37:05.222763 2549 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc866c96c-r4b4p_calico-apiserver(5e48afab-b056-4d85-9cc7-4c4bf819b790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc866c96c-r4b4p_calico-apiserver(5e48afab-b056-4d85-9cc7-4c4bf819b790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:05.231346 containerd[1471]: time="2025-11-08T00:37:05.231204978Z" level=error msg="Failed to destroy network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.236439 containerd[1471]: time="2025-11-08T00:37:05.236336501Z" level=error msg="encountered an error cleaning up failed sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.236439 containerd[1471]: time="2025-11-08T00:37:05.236392531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gsg4q,Uid:8f5c159c-1100-4d8d-b4a2-0811154f10ae,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.236903 kubelet[2549]: E1108 00:37:05.236734 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.236903 kubelet[2549]: E1108 00:37:05.236780 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gsg4q" Nov 8 00:37:05.236903 kubelet[2549]: E1108 00:37:05.236800 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gsg4q" Nov 8 
00:37:05.236995 kubelet[2549]: E1108 00:37:05.236843 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gsg4q_kube-system(8f5c159c-1100-4d8d-b4a2-0811154f10ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gsg4q_kube-system(8f5c159c-1100-4d8d-b4a2-0811154f10ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gsg4q" podUID="8f5c159c-1100-4d8d-b4a2-0811154f10ae" Nov 8 00:37:05.241469 containerd[1471]: time="2025-11-08T00:37:05.241436414Z" level=error msg="Failed to destroy network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.242187 containerd[1471]: time="2025-11-08T00:37:05.242044512Z" level=error msg="encountered an error cleaning up failed sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.242187 containerd[1471]: time="2025-11-08T00:37:05.242088892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66pk,Uid:e5ed425e-ae3a-4fee-9b79-13f79eee03b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.242293 kubelet[2549]: E1108 00:37:05.242260 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.242359 kubelet[2549]: E1108 00:37:05.242304 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w66pk" Nov 8 00:37:05.242359 kubelet[2549]: E1108 00:37:05.242341 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-w66pk" Nov 8 00:37:05.242540 kubelet[2549]: E1108 00:37:05.242387 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:05.247377 containerd[1471]: time="2025-11-08T00:37:05.247302665Z" level=error msg="Failed to destroy network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.248419 containerd[1471]: time="2025-11-08T00:37:05.248303971Z" level=error msg="encountered an error cleaning up failed sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.248574 containerd[1471]: time="2025-11-08T00:37:05.248515100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhv8n,Uid:50e2147d-0531-46fa-b3e7-3b3b05f008fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.249081 kubelet[2549]: E1108 00:37:05.248968 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.249081 kubelet[2549]: E1108 00:37:05.249035 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fhv8n" Nov 8 00:37:05.249538 kubelet[2549]: E1108 00:37:05.249163 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fhv8n" Nov 8 00:37:05.249538 kubelet[2549]: E1108 00:37:05.249233 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fhv8n_kube-system(50e2147d-0531-46fa-b3e7-3b3b05f008fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fhv8n_kube-system(50e2147d-0531-46fa-b3e7-3b3b05f008fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fhv8n" podUID="50e2147d-0531-46fa-b3e7-3b3b05f008fd" Nov 8 00:37:05.249876 containerd[1471]: time="2025-11-08T00:37:05.249762656Z" level=error msg="Failed to destroy network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.250897 containerd[1471]: time="2025-11-08T00:37:05.250875762Z" level=error msg="encountered an error cleaning up failed sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.251034 containerd[1471]: time="2025-11-08T00:37:05.250992632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7579885fb4-6qtb5,Uid:0470eb38-6ef8-454a-af84-3dd964fdf6b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.251413 kubelet[2549]: E1108 00:37:05.251281 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.251413 kubelet[2549]: E1108 00:37:05.251349 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7579885fb4-6qtb5" Nov 8 00:37:05.251413 kubelet[2549]: E1108 00:37:05.251368 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7579885fb4-6qtb5" Nov 8 00:37:05.251505 kubelet[2549]: E1108 00:37:05.251408 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7579885fb4-6qtb5_calico-system(0470eb38-6ef8-454a-af84-3dd964fdf6b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7579885fb4-6qtb5_calico-system(0470eb38-6ef8-454a-af84-3dd964fdf6b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7579885fb4-6qtb5" podUID="0470eb38-6ef8-454a-af84-3dd964fdf6b9" Nov 8 00:37:05.256765 containerd[1471]: time="2025-11-08T00:37:05.256716423Z" level=error msg="Failed to destroy network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.257077 containerd[1471]: time="2025-11-08T00:37:05.257035302Z" level=error msg="encountered an error cleaning up failed sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.257125 containerd[1471]: time="2025-11-08T00:37:05.257090662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-jlfgk,Uid:b7ec371f-050b-4208-a3ac-8f708d9ed8b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.257259 kubelet[2549]: E1108 00:37:05.257220 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.257259 kubelet[2549]: E1108 00:37:05.257257 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" Nov 8 00:37:05.257392 kubelet[2549]: E1108 00:37:05.257272 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" Nov 8 00:37:05.257959 kubelet[2549]: E1108 00:37:05.257510 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc866c96c-jlfgk_calico-apiserver(b7ec371f-050b-4208-a3ac-8f708d9ed8b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc866c96c-jlfgk_calico-apiserver(b7ec371f-050b-4208-a3ac-8f708d9ed8b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:05.264896 containerd[1471]: time="2025-11-08T00:37:05.264842786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x5vbk,Uid:b339edb0-297f-4caa-90a2-1e5e9c9f0583,Namespace:calico-system,Attempt:0,}" Nov 8 00:37:05.290886 containerd[1471]: time="2025-11-08T00:37:05.290698350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769c89c5c9-znhjq,Uid:9f16a8b2-c22c-42c4-a0b9-731351a537c7,Namespace:calico-system,Attempt:0,}" Nov 8 00:37:05.334680 containerd[1471]: time="2025-11-08T00:37:05.334631383Z" level=error msg="Failed to destroy network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.335296 containerd[1471]: time="2025-11-08T00:37:05.335273310Z" level=error msg="encountered an error cleaning up failed sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.335530 containerd[1471]: time="2025-11-08T00:37:05.335433510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x5vbk,Uid:b339edb0-297f-4caa-90a2-1e5e9c9f0583,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.336844 kubelet[2549]: E1108 00:37:05.335907 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.336844 kubelet[2549]: E1108 00:37:05.335995 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x5vbk" Nov 8 00:37:05.336844 kubelet[2549]: E1108 00:37:05.336036 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x5vbk" Nov 8 00:37:05.336954 kubelet[2549]: E1108 00:37:05.336749 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x5vbk_calico-system(b339edb0-297f-4caa-90a2-1e5e9c9f0583)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x5vbk_calico-system(b339edb0-297f-4caa-90a2-1e5e9c9f0583)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:37:05.361431 containerd[1471]: time="2025-11-08T00:37:05.361400444Z" level=error msg="Failed to destroy network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.361913 containerd[1471]: time="2025-11-08T00:37:05.361875941Z" level=error msg="encountered an error cleaning up failed sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.361947 containerd[1471]: time="2025-11-08T00:37:05.361916611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769c89c5c9-znhjq,Uid:9f16a8b2-c22c-42c4-a0b9-731351a537c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.362086 kubelet[2549]: E1108 00:37:05.362057 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:05.362140 kubelet[2549]: E1108 00:37:05.362098 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" Nov 8 00:37:05.362140 kubelet[2549]: E1108 00:37:05.362132 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" Nov 8 00:37:05.362307 kubelet[2549]: E1108 00:37:05.362174 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-769c89c5c9-znhjq_calico-system(9f16a8b2-c22c-42c4-a0b9-731351a537c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-769c89c5c9-znhjq_calico-system(9f16a8b2-c22c-42c4-a0b9-731351a537c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:37:06.077656 kubelet[2549]: I1108 00:37:06.076350 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:06.082348 containerd[1471]: time="2025-11-08T00:37:06.080866147Z" level=info msg="StopPodSandbox for \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\"" Nov 8 00:37:06.082348 containerd[1471]: time="2025-11-08T00:37:06.081057836Z" level=info msg="Ensure that sandbox 0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff in task-service has been cleanup successfully" Nov 8 00:37:06.090274 kubelet[2549]: I1108 00:37:06.088277 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:06.096617 containerd[1471]: time="2025-11-08T00:37:06.095758291Z" level=info msg="StopPodSandbox for \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\"" Nov 8 00:37:06.096617 containerd[1471]: time="2025-11-08T00:37:06.096010021Z" level=info msg="Ensure that sandbox bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1 in task-service has been cleanup successfully" Nov 8 00:37:06.099333 kubelet[2549]: I1108 00:37:06.099092 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:06.100768 containerd[1471]: time="2025-11-08T00:37:06.100728267Z" level=info msg="StopPodSandbox for \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\"" Nov 8 00:37:06.100954 containerd[1471]: time="2025-11-08T00:37:06.100918446Z" level=info msg="Ensure that sandbox 
b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1 in task-service has been cleanup successfully" Nov 8 00:37:06.107865 kubelet[2549]: I1108 00:37:06.104022 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:06.107956 containerd[1471]: time="2025-11-08T00:37:06.104777535Z" level=info msg="StopPodSandbox for \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\"" Nov 8 00:37:06.107956 containerd[1471]: time="2025-11-08T00:37:06.104936124Z" level=info msg="Ensure that sandbox 16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1 in task-service has been cleanup successfully" Nov 8 00:37:06.109015 kubelet[2549]: I1108 00:37:06.108948 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:06.110859 containerd[1471]: time="2025-11-08T00:37:06.110525977Z" level=info msg="StopPodSandbox for \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\"" Nov 8 00:37:06.113584 containerd[1471]: time="2025-11-08T00:37:06.113526598Z" level=info msg="Ensure that sandbox 6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b in task-service has been cleanup successfully" Nov 8 00:37:06.119773 kubelet[2549]: I1108 00:37:06.119731 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:06.122286 containerd[1471]: time="2025-11-08T00:37:06.122198122Z" level=info msg="StopPodSandbox for \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\"" Nov 8 00:37:06.122619 containerd[1471]: time="2025-11-08T00:37:06.122379081Z" level=info msg="Ensure that sandbox 13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf in task-service has been cleanup successfully" Nov 8 00:37:06.157175 kubelet[2549]: I1108 00:37:06.157132 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:06.163906 containerd[1471]: time="2025-11-08T00:37:06.163850105Z" level=info msg="StopPodSandbox for \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\"" Nov 8 00:37:06.167562 containerd[1471]: time="2025-11-08T00:37:06.167517474Z" level=info msg="Ensure that sandbox cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502 in task-service has been cleanup successfully" Nov 8 00:37:06.190497 containerd[1471]: time="2025-11-08T00:37:06.190384804Z" level=error msg="StopPodSandbox for \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\" failed" error="failed to destroy network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.192842 kubelet[2549]: E1108 00:37:06.192797 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:06.192945 kubelet[2549]: E1108 00:37:06.192855 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff"} Nov 8 00:37:06.192945 kubelet[2549]: E1108 00:37:06.192909 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50e2147d-0531-46fa-b3e7-3b3b05f008fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.193050 kubelet[2549]: E1108 00:37:06.192943 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50e2147d-0531-46fa-b3e7-3b3b05f008fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fhv8n" podUID="50e2147d-0531-46fa-b3e7-3b3b05f008fd" Nov 8 00:37:06.201215 kubelet[2549]: I1108 00:37:06.201188 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:06.206306 containerd[1471]: time="2025-11-08T00:37:06.205828277Z" level=info msg="StopPodSandbox for \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\"" Nov 8 00:37:06.206306 containerd[1471]: time="2025-11-08T00:37:06.206017727Z" level=info msg="Ensure that sandbox e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764 in task-service has been cleanup successfully" Nov 8 00:37:06.231177 containerd[1471]: time="2025-11-08T00:37:06.231115011Z" level=error msg="StopPodSandbox for \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\" failed" error="failed to destroy network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.231794 kubelet[2549]: E1108 00:37:06.231758 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:06.232000 kubelet[2549]: E1108 00:37:06.231976 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b"} Nov 8 00:37:06.232178 kubelet[2549]: E1108 00:37:06.232102 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"9f16a8b2-c22c-42c4-a0b9-731351a537c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.232178 kubelet[2549]: E1108 00:37:06.232132 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f16a8b2-c22c-42c4-a0b9-731351a537c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:37:06.233189 containerd[1471]: time="2025-11-08T00:37:06.233160874Z" level=error msg="StopPodSandbox for \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\" failed" error="failed to destroy network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.233724 kubelet[2549]: E1108 00:37:06.233481 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:06.233724 kubelet[2549]: E1108 00:37:06.233638 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1"} Nov 8 00:37:06.233724 kubelet[2549]: E1108 00:37:06.233672 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.233724 kubelet[2549]: E1108 00:37:06.233697 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7579885fb4-6qtb5" podUID="0470eb38-6ef8-454a-af84-3dd964fdf6b9" Nov 8 00:37:06.244981 containerd[1471]: 
time="2025-11-08T00:37:06.244947669Z" level=error msg="StopPodSandbox for \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\" failed" error="failed to destroy network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.245297 kubelet[2549]: E1108 00:37:06.245194 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:06.245297 kubelet[2549]: E1108 00:37:06.245226 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1"} Nov 8 00:37:06.245297 kubelet[2549]: E1108 00:37:06.245251 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7ec371f-050b-4208-a3ac-8f708d9ed8b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.245297 kubelet[2549]: E1108 00:37:06.245272 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7ec371f-050b-4208-a3ac-8f708d9ed8b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:06.253051 containerd[1471]: time="2025-11-08T00:37:06.252603775Z" level=error msg="StopPodSandbox for \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\" failed" error="failed to destroy network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.253117 kubelet[2549]: E1108 00:37:06.252863 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:06.253117 kubelet[2549]: E1108 00:37:06.252893 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1"} Nov 8 00:37:06.253117 kubelet[2549]: E1108 00:37:06.252917 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e48afab-b056-4d85-9cc7-4c4bf819b790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.253117 kubelet[2549]: E1108 00:37:06.252938 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e48afab-b056-4d85-9cc7-4c4bf819b790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:06.264357 containerd[1471]: time="2025-11-08T00:37:06.264148261Z" level=error msg="StopPodSandbox for \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\" failed" error="failed to destroy network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.264656 kubelet[2549]: E1108 00:37:06.264477 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:06.264656 kubelet[2549]: E1108 00:37:06.264577 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502"} Nov 8 00:37:06.264656 kubelet[2549]: E1108 00:37:06.264606 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5ed425e-ae3a-4fee-9b79-13f79eee03b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.264656 kubelet[2549]: E1108 00:37:06.264627 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5ed425e-ae3a-4fee-9b79-13f79eee03b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:06.278984 containerd[1471]: time="2025-11-08T00:37:06.278919796Z" level=error msg="StopPodSandbox for \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\" failed" error="failed to destroy network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.280342 kubelet[2549]: E1108 00:37:06.279468 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:06.280342 kubelet[2549]: E1108 00:37:06.279561 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf"} Nov 8 00:37:06.280342 kubelet[2549]: E1108 00:37:06.279605 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b339edb0-297f-4caa-90a2-1e5e9c9f0583\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.280342 kubelet[2549]: E1108 00:37:06.279633 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b339edb0-297f-4caa-90a2-1e5e9c9f0583\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:37:06.285262 containerd[1471]: time="2025-11-08T00:37:06.285207676Z" level=error msg="StopPodSandbox for \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\" failed" error="failed to destroy network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:37:06.285715 kubelet[2549]: E1108 00:37:06.285506 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:06.285715 kubelet[2549]: E1108 00:37:06.285562 2549 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764"} Nov 8 00:37:06.285715 kubelet[2549]: E1108 00:37:06.285602 2549 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f5c159c-1100-4d8d-b4a2-0811154f10ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:37:06.285715 kubelet[2549]: E1108 00:37:06.285626 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f5c159c-1100-4d8d-b4a2-0811154f10ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gsg4q" podUID="8f5c159c-1100-4d8d-b4a2-0811154f10ae" Nov 8 00:37:08.730103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439943218.mount: Deactivated successfully. Nov 8 00:37:08.762025 containerd[1471]: time="2025-11-08T00:37:08.761972011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:08.762934 containerd[1471]: time="2025-11-08T00:37:08.762740689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:37:08.764484 containerd[1471]: time="2025-11-08T00:37:08.763369418Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:08.765925 containerd[1471]: time="2025-11-08T00:37:08.764967464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:37:08.765925 containerd[1471]: time="2025-11-08T00:37:08.765598833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.692313408s" Nov 8 00:37:08.765925 containerd[1471]: time="2025-11-08T00:37:08.765632673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:37:08.792233 containerd[1471]: time="2025-11-08T00:37:08.792203627Z" level=info msg="CreateContainer within sandbox \"acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:37:08.805629 containerd[1471]: time="2025-11-08T00:37:08.805592203Z" level=info msg="CreateContainer within sandbox \"acaebe5a3a60f6e3443aca42cab34a1bf48d691cfee327bd2e71159ca388fefd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9\"" Nov 8 00:37:08.806941 containerd[1471]: time="2025-11-08T00:37:08.806863840Z" level=info msg="StartContainer for \"9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9\"" Nov 8 00:37:08.838453 systemd[1]: Started cri-containerd-9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9.scope - libcontainer container 9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9. Nov 8 00:37:08.874901 containerd[1471]: time="2025-11-08T00:37:08.874868921Z" level=info msg="StartContainer for \"9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9\" returns successfully" Nov 8 00:37:08.971522 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:37:08.971648 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:37:09.065485 containerd[1471]: time="2025-11-08T00:37:09.064966425Z" level=info msg="StopPodSandbox for \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\"" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.148 [INFO][3728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.148 [INFO][3728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" iface="eth0" netns="/var/run/netns/cni-dc7249ef-3ce7-4105-431c-32a8a883f3f9" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.149 [INFO][3728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" iface="eth0" netns="/var/run/netns/cni-dc7249ef-3ce7-4105-431c-32a8a883f3f9" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.150 [INFO][3728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" iface="eth0" netns="/var/run/netns/cni-dc7249ef-3ce7-4105-431c-32a8a883f3f9" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.150 [INFO][3728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.150 [INFO][3728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.179 [INFO][3741] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.179 [INFO][3741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
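
The entire cascade above, both the failed CNI ADDs (RunPodSandbox) and the failed CNI DELs (StopPodSandbox), shares one root cause spelled out in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, and that file only exists once the calico/node container has started and mounted /var/lib/calico/. The records here show the turning point: the ghcr.io/flatcar/calico/node:v3.30.4 pull finished (logged above as taking 3.692313408s), calico-node started, and the StopPodSandbox at 00:37:09 walks cleanly through netns teardown and IPAM release in the records above and below. A minimal Go sketch of the gate implied by the error string follows; it is illustrative only, not Calico's source, and the function name is invented:

package main

import (
	"fmt"
	"os"
)

// Path copied verbatim from the log records above; calico/node writes this
// file after it starts and mounts /var/lib/calico/.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady mirrors the stat visible in the failures above: until the
// file exists, every CNI ADD/DEL returns this error, so sandbox setup and
// teardown both fail.
func calicoNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node appears ready")
}

Kubelet retries sandbox creation on its own backoff, which is why the same error repeats for each of the pods above between 00:37:05 and 00:37:06 and then stops once calico-node is up.
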
Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.179 [INFO][3741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.189 [WARNING][3741] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.189 [INFO][3741] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.190 [INFO][3741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:09.197950 containerd[1471]: 2025-11-08 00:37:09.195 [INFO][3728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:09.198877 containerd[1471]: time="2025-11-08T00:37:09.198146546Z" level=info msg="TearDown network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\" successfully" Nov 8 00:37:09.198877 containerd[1471]: time="2025-11-08T00:37:09.198177606Z" level=info msg="StopPodSandbox for \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\" returns successfully" Nov 8 00:37:09.213372 kubelet[2549]: E1108 00:37:09.213305 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:09.237057 kubelet[2549]: I1108 00:37:09.236855 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q28fg" podStartSLOduration=1.3824840919999999 podStartE2EDuration="10.23684149s" podCreationTimestamp="2025-11-08 00:36:59 +0000 UTC" firstStartedPulling="2025-11-08 00:36:59.912517352 +0000 UTC m=+20.034793413" lastFinishedPulling="2025-11-08 00:37:08.76687476 +0000 UTC m=+28.889150811" observedRunningTime="2025-11-08 00:37:09.235795962 +0000 UTC m=+29.358072013" watchObservedRunningTime="2025-11-08 00:37:09.23684149 +0000 UTC m=+29.359117541" Nov 8 00:37:09.258357 kubelet[2549]: I1108 00:37:09.257401 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-ca-bundle\") pod \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\" (UID: \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\") " Nov 8 00:37:09.258357 kubelet[2549]: I1108 00:37:09.257437 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk8gt\" (UniqueName: \"kubernetes.io/projected/0470eb38-6ef8-454a-af84-3dd964fdf6b9-kube-api-access-hk8gt\") pod \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\" (UID: \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\") " Nov 8 00:37:09.258357 kubelet[2549]: I1108 00:37:09.257454 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-backend-key-pair\") pod \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\" 
(UID: \"0470eb38-6ef8-454a-af84-3dd964fdf6b9\") " Nov 8 00:37:09.258357 kubelet[2549]: I1108 00:37:09.258096 2549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0470eb38-6ef8-454a-af84-3dd964fdf6b9" (UID: "0470eb38-6ef8-454a-af84-3dd964fdf6b9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:37:09.263391 kubelet[2549]: I1108 00:37:09.263369 2549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0470eb38-6ef8-454a-af84-3dd964fdf6b9-kube-api-access-hk8gt" (OuterVolumeSpecName: "kube-api-access-hk8gt") pod "0470eb38-6ef8-454a-af84-3dd964fdf6b9" (UID: "0470eb38-6ef8-454a-af84-3dd964fdf6b9"). InnerVolumeSpecName "kube-api-access-hk8gt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:37:09.264015 kubelet[2549]: I1108 00:37:09.263999 2549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0470eb38-6ef8-454a-af84-3dd964fdf6b9" (UID: "0470eb38-6ef8-454a-af84-3dd964fdf6b9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:37:09.358164 kubelet[2549]: I1108 00:37:09.358060 2549 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-ca-bundle\") on node \"172-239-57-24\" DevicePath \"\"" Nov 8 00:37:09.358164 kubelet[2549]: I1108 00:37:09.358091 2549 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hk8gt\" (UniqueName: \"kubernetes.io/projected/0470eb38-6ef8-454a-af84-3dd964fdf6b9-kube-api-access-hk8gt\") on node \"172-239-57-24\" DevicePath \"\"" Nov 8 00:37:09.358164 kubelet[2549]: I1108 00:37:09.358101 2549 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0470eb38-6ef8-454a-af84-3dd964fdf6b9-whisker-backend-key-pair\") on node \"172-239-57-24\" DevicePath \"\"" Nov 8 00:37:09.729602 systemd[1]: run-netns-cni\x2ddc7249ef\x2d3ce7\x2d4105\x2d431c\x2d32a8a883f3f9.mount: Deactivated successfully. Nov 8 00:37:09.729720 systemd[1]: var-lib-kubelet-pods-0470eb38\x2d6ef8\x2d454a\x2daf84\x2d3dd964fdf6b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhk8gt.mount: Deactivated successfully. Nov 8 00:37:09.729800 systemd[1]: var-lib-kubelet-pods-0470eb38\x2d6ef8\x2d454a\x2daf84\x2d3dd964fdf6b9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:37:09.978539 systemd[1]: Removed slice kubepods-besteffort-pod0470eb38_6ef8_454a_af84_3dd964fdf6b9.slice - libcontainer container kubepods-besteffort-pod0470eb38_6ef8_454a_af84_3dd964fdf6b9.slice. Nov 8 00:37:10.215665 kubelet[2549]: E1108 00:37:10.215635 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:10.244032 systemd[1]: run-containerd-runc-k8s.io-9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9-runc.6MCMlw.mount: Deactivated successfully. 
Nov 8 00:37:10.280578 systemd[1]: Created slice kubepods-besteffort-pode93c897e_2024_4417_8017_e4980e091fbc.slice - libcontainer container kubepods-besteffort-pode93c897e_2024_4417_8017_e4980e091fbc.slice. Nov 8 00:37:10.364045 kubelet[2549]: I1108 00:37:10.364004 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e93c897e-2024-4417-8017-e4980e091fbc-whisker-backend-key-pair\") pod \"whisker-5f9666476b-6d4dx\" (UID: \"e93c897e-2024-4417-8017-e4980e091fbc\") " pod="calico-system/whisker-5f9666476b-6d4dx" Nov 8 00:37:10.364045 kubelet[2549]: I1108 00:37:10.364045 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrhq\" (UniqueName: \"kubernetes.io/projected/e93c897e-2024-4417-8017-e4980e091fbc-kube-api-access-twrhq\") pod \"whisker-5f9666476b-6d4dx\" (UID: \"e93c897e-2024-4417-8017-e4980e091fbc\") " pod="calico-system/whisker-5f9666476b-6d4dx" Nov 8 00:37:10.364254 kubelet[2549]: I1108 00:37:10.364065 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e93c897e-2024-4417-8017-e4980e091fbc-whisker-ca-bundle\") pod \"whisker-5f9666476b-6d4dx\" (UID: \"e93c897e-2024-4417-8017-e4980e091fbc\") " pod="calico-system/whisker-5f9666476b-6d4dx" Nov 8 00:37:10.586910 containerd[1471]: time="2025-11-08T00:37:10.585930889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f9666476b-6d4dx,Uid:e93c897e-2024-4417-8017-e4980e091fbc,Namespace:calico-system,Attempt:0,}" Nov 8 00:37:10.771776 systemd-networkd[1389]: cali63f1e9654d8: Link UP Nov 8 00:37:10.772059 systemd-networkd[1389]: cali63f1e9654d8: Gained carrier Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.639 [INFO][3890] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.654 [INFO][3890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0 whisker-5f9666476b- calico-system e93c897e-2024-4417-8017-e4980e091fbc 900 0 2025-11-08 00:37:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f9666476b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-57-24 whisker-5f9666476b-6d4dx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali63f1e9654d8 [] [] }} ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.654 [INFO][3890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.706 [INFO][3904] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" HandleID="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" 
Workload="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.706 [INFO][3904] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" HandleID="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Workload="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f730), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-24", "pod":"whisker-5f9666476b-6d4dx", "timestamp":"2025-11-08 00:37:10.706241308 +0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.706 [INFO][3904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.706 [INFO][3904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.706 [INFO][3904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.716 [INFO][3904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.721 [INFO][3904] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.725 [INFO][3904] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.727 [INFO][3904] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.737 [INFO][3904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.737 [INFO][3904] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.741 [INFO][3904] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327 Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.746 [INFO][3904] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.753 [INFO][3904] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.1/26] block=192.168.114.0/26 handle="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.753 [INFO][3904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.1/26] handle="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" host="172-239-57-24" Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 
00:37:10.753 [INFO][3904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:10.795377 containerd[1471]: 2025-11-08 00:37:10.753 [INFO][3904] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.1/26] IPv6=[] ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" HandleID="k8s-pod-network.c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Workload="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" Nov 8 00:37:10.796097 containerd[1471]: 2025-11-08 00:37:10.758 [INFO][3890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0", GenerateName:"whisker-5f9666476b-", Namespace:"calico-system", SelfLink:"", UID:"e93c897e-2024-4417-8017-e4980e091fbc", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f9666476b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"whisker-5f9666476b-6d4dx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali63f1e9654d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:10.796097 containerd[1471]: 2025-11-08 00:37:10.759 [INFO][3890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.1/32] ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" Nov 8 00:37:10.796097 containerd[1471]: 2025-11-08 00:37:10.759 [INFO][3890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63f1e9654d8 ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" Nov 8 00:37:10.796097 containerd[1471]: 2025-11-08 00:37:10.772 [INFO][3890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" Nov 8 00:37:10.796097 containerd[1471]: 2025-11-08 00:37:10.773 [INFO][3890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" 
Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0", GenerateName:"whisker-5f9666476b-", Namespace:"calico-system", SelfLink:"", UID:"e93c897e-2024-4417-8017-e4980e091fbc", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f9666476b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327", Pod:"whisker-5f9666476b-6d4dx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali63f1e9654d8", MAC:"d2:cc:a0:b8:85:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:10.796097 containerd[1471]: 2025-11-08 00:37:10.788 [INFO][3890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327" Namespace="calico-system" Pod="whisker-5f9666476b-6d4dx" WorkloadEndpoint="172--239--57--24-k8s-whisker--5f9666476b--6d4dx-eth0" Nov 8 00:37:10.821513 containerd[1471]: time="2025-11-08T00:37:10.821419157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:10.821703 containerd[1471]: time="2025-11-08T00:37:10.821664577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:10.822733 containerd[1471]: time="2025-11-08T00:37:10.822615355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:10.823806 containerd[1471]: time="2025-11-08T00:37:10.823110234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:10.869539 systemd[1]: Started cri-containerd-c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327.scope - libcontainer container c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327. 
Nov 8 00:37:10.911646 containerd[1471]: time="2025-11-08T00:37:10.911569417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f9666476b-6d4dx,Uid:e93c897e-2024-4417-8017-e4980e091fbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3482d6b6a91228a093d7226dada40b381ba61c7bb3f3386dcb42d7a9c574327\"" Nov 8 00:37:10.914303 containerd[1471]: time="2025-11-08T00:37:10.914081732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:37:11.039993 containerd[1471]: time="2025-11-08T00:37:11.039938628Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:11.041093 containerd[1471]: time="2025-11-08T00:37:11.041054947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:37:11.041205 containerd[1471]: time="2025-11-08T00:37:11.041122917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:37:11.041401 kubelet[2549]: E1108 00:37:11.041369 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:37:11.041476 kubelet[2549]: E1108 00:37:11.041418 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:37:11.041596 kubelet[2549]: E1108 00:37:11.041553 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e094445afe4d4a0db1ebe2df45d03ef3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:11.043712 containerd[1471]: time="2025-11-08T00:37:11.043649471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:37:11.183545 containerd[1471]: time="2025-11-08T00:37:11.183489892Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:11.184346 containerd[1471]: time="2025-11-08T00:37:11.184287621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:37:11.184436 containerd[1471]: time="2025-11-08T00:37:11.184393251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:37:11.184580 kubelet[2549]: E1108 00:37:11.184542 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:37:11.184620 kubelet[2549]: E1108 00:37:11.184589 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:37:11.184784 kubelet[2549]: E1108 00:37:11.184714 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:11.186110 kubelet[2549]: E1108 00:37:11.186049 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:37:11.220787 kubelet[2549]: E1108 00:37:11.220657 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:37:11.974340 kubelet[2549]: I1108 00:37:11.974289 2549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0470eb38-6ef8-454a-af84-3dd964fdf6b9" path="/var/lib/kubelet/pods/0470eb38-6ef8-454a-af84-3dd964fdf6b9/volumes" Nov 8 00:37:12.037628 systemd-networkd[1389]: cali63f1e9654d8: Gained IPv6LL Nov 8 00:37:12.221597 kubelet[2549]: E1108 00:37:12.221466 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:37:17.972987 containerd[1471]: time="2025-11-08T00:37:17.972250398Z" level=info msg="StopPodSandbox for \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\"" Nov 8 00:37:17.975687 containerd[1471]: time="2025-11-08T00:37:17.975623835Z" level=info msg="StopPodSandbox for \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\"" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.028 [INFO][4113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.028 [INFO][4113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" iface="eth0" netns="/var/run/netns/cni-a5f2880e-2e12-2b18-e221-276aff628372" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.029 [INFO][4113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" iface="eth0" netns="/var/run/netns/cni-a5f2880e-2e12-2b18-e221-276aff628372" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.030 [INFO][4113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" iface="eth0" netns="/var/run/netns/cni-a5f2880e-2e12-2b18-e221-276aff628372" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.030 [INFO][4113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.030 [INFO][4113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.079 [INFO][4131] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.079 [INFO][4131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.079 [INFO][4131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.088 [WARNING][4131] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.088 [INFO][4131] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.090 [INFO][4131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:18.096425 containerd[1471]: 2025-11-08 00:37:18.092 [INFO][4113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:18.100289 containerd[1471]: time="2025-11-08T00:37:18.099961948Z" level=info msg="TearDown network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\" successfully" Nov 8 00:37:18.100289 containerd[1471]: time="2025-11-08T00:37:18.099993748Z" level=info msg="StopPodSandbox for \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\" returns successfully" Nov 8 00:37:18.102420 containerd[1471]: time="2025-11-08T00:37:18.102379307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769c89c5c9-znhjq,Uid:9f16a8b2-c22c-42c4-a0b9-731351a537c7,Namespace:calico-system,Attempt:1,}" Nov 8 00:37:18.107157 systemd[1]: run-netns-cni\x2da5f2880e\x2d2e12\x2d2b18\x2de221\x2d276aff628372.mount: Deactivated successfully. Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.037 [INFO][4121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.037 [INFO][4121] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" iface="eth0" netns="/var/run/netns/cni-0b8e4436-db0c-c699-09ba-e20313749c62" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.038 [INFO][4121] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" iface="eth0" netns="/var/run/netns/cni-0b8e4436-db0c-c699-09ba-e20313749c62" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.039 [INFO][4121] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" iface="eth0" netns="/var/run/netns/cni-0b8e4436-db0c-c699-09ba-e20313749c62" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.039 [INFO][4121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.039 [INFO][4121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.084 [INFO][4140] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.084 [INFO][4140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.090 [INFO][4140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.101 [WARNING][4140] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.102 [INFO][4140] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.104 [INFO][4140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:18.119880 containerd[1471]: 2025-11-08 00:37:18.112 [INFO][4121] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:18.119880 containerd[1471]: time="2025-11-08T00:37:18.119507217Z" level=info msg="TearDown network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\" successfully" Nov 8 00:37:18.119880 containerd[1471]: time="2025-11-08T00:37:18.119523967Z" level=info msg="StopPodSandbox for \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\" returns successfully" Nov 8 00:37:18.120235 containerd[1471]: time="2025-11-08T00:37:18.119970576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-r4b4p,Uid:5e48afab-b056-4d85-9cc7-4c4bf819b790,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:37:18.128963 systemd[1]: run-netns-cni\x2d0b8e4436\x2ddb0c\x2dc699\x2d09ba\x2de20313749c62.mount: Deactivated successfully. Nov 8 00:37:18.247566 systemd-networkd[1389]: cali157ad780608: Link UP Nov 8 00:37:18.249411 systemd-networkd[1389]: cali157ad780608: Gained carrier Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.171 [INFO][4157] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.182 [INFO][4157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0 calico-apiserver-5cc866c96c- calico-apiserver 5e48afab-b056-4d85-9cc7-4c4bf819b790 946 0 2025-11-08 00:36:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc866c96c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-57-24 calico-apiserver-5cc866c96c-r4b4p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali157ad780608 [] [] }} ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.182 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.208 [INFO][4172] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" HandleID="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.209 [INFO][4172] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" HandleID="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-57-24", "pod":"calico-apiserver-5cc866c96c-r4b4p", "timestamp":"2025-11-08 00:37:18.208970004 
+0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.209 [INFO][4172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.209 [INFO][4172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.209 [INFO][4172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.215 [INFO][4172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.219 [INFO][4172] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.223 [INFO][4172] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.224 [INFO][4172] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.226 [INFO][4172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.226 [INFO][4172] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.227 [INFO][4172] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.230 [INFO][4172] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.235 [INFO][4172] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.2/26] block=192.168.114.0/26 handle="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.235 [INFO][4172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.2/26] handle="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" host="172-239-57-24" Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.235 [INFO][4172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:37:18.259713 containerd[1471]: 2025-11-08 00:37:18.235 [INFO][4172] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.2/26] IPv6=[] ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" HandleID="k8s-pod-network.3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.261540 containerd[1471]: 2025-11-08 00:37:18.238 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e48afab-b056-4d85-9cc7-4c4bf819b790", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"calico-apiserver-5cc866c96c-r4b4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali157ad780608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:18.261540 containerd[1471]: 2025-11-08 00:37:18.238 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.2/32] ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.261540 containerd[1471]: 2025-11-08 00:37:18.238 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali157ad780608 ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.261540 containerd[1471]: 2025-11-08 00:37:18.246 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.261540 containerd[1471]: 2025-11-08 00:37:18.246 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e48afab-b056-4d85-9cc7-4c4bf819b790", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e", Pod:"calico-apiserver-5cc866c96c-r4b4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali157ad780608", MAC:"1e:40:2e:08:5e:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:18.261540 containerd[1471]: 2025-11-08 00:37:18.255 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-r4b4p" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:18.281147 containerd[1471]: time="2025-11-08T00:37:18.280538321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:18.281147 containerd[1471]: time="2025-11-08T00:37:18.280641591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:18.281497 containerd[1471]: time="2025-11-08T00:37:18.281265651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:18.281497 containerd[1471]: time="2025-11-08T00:37:18.281363031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:18.302457 systemd[1]: Started cri-containerd-3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e.scope - libcontainer container 3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e. 
Nov 8 00:37:18.356210 systemd-networkd[1389]: cali0d1cef27536: Link UP Nov 8 00:37:18.360126 systemd-networkd[1389]: cali0d1cef27536: Gained carrier Nov 8 00:37:18.374251 containerd[1471]: time="2025-11-08T00:37:18.374100376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-r4b4p,Uid:5e48afab-b056-4d85-9cc7-4c4bf819b790,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e\"" Nov 8 00:37:18.379467 containerd[1471]: time="2025-11-08T00:37:18.379435063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.169 [INFO][4149] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.182 [INFO][4149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0 calico-kube-controllers-769c89c5c9- calico-system 9f16a8b2-c22c-42c4-a0b9-731351a537c7 945 0 2025-11-08 00:36:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:769c89c5c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-57-24 calico-kube-controllers-769c89c5c9-znhjq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0d1cef27536 [] [] }} ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.182 [INFO][4149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.215 [INFO][4177] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" HandleID="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.215 [INFO][4177] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" HandleID="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cafe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-24", "pod":"calico-kube-controllers-769c89c5c9-znhjq", "timestamp":"2025-11-08 00:37:18.21500477 +0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.215 [INFO][4177] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.235 [INFO][4177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.235 [INFO][4177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.315 [INFO][4177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.320 [INFO][4177] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.324 [INFO][4177] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.326 [INFO][4177] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.328 [INFO][4177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.328 [INFO][4177] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.329 [INFO][4177] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654 Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.337 [INFO][4177] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.341 [INFO][4177] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.3/26] block=192.168.114.0/26 handle="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.341 [INFO][4177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.3/26] handle="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" host="172-239-57-24" Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.341 [INFO][4177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
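
The interleaving just above also shows what the host-wide IPAM lock buys: request [4177] logs "About to acquire" at 00:37:18.215 but "Acquired" only at .235, the moment [4172] releases it, so the two concurrent CNI ADDs claim 192.168.114.2 and .3 in sequence instead of racing for the same address. A toy model with a mutex standing in for the lock (which pod goes first is scheduling-dependent, as it is here):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var ipamLock sync.Mutex // stands in for the host-wide IPAM lock
        next := 2               // .1 already went to the whisker pod above
        var wg sync.WaitGroup
        for _, pod := range []string{
            "calico-apiserver-5cc866c96c-r4b4p",
            "calico-kube-controllers-769c89c5c9-znhjq",
        } {
            wg.Add(1)
            go func(pod string) {
                defer wg.Done()
                ipamLock.Lock()         // "Acquired host-wide IPAM lock."
                defer ipamLock.Unlock() // "Released host-wide IPAM lock."
                fmt.Printf("%s -> 192.168.114.%d\n", pod, next)
                next++
            }(pod)
        }
        wg.Wait()
    }
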
Nov 8 00:37:18.388844 containerd[1471]: 2025-11-08 00:37:18.341 [INFO][4177] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.3/26] IPv6=[] ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" HandleID="k8s-pod-network.d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.389404 containerd[1471]: 2025-11-08 00:37:18.349 [INFO][4149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0", GenerateName:"calico-kube-controllers-769c89c5c9-", Namespace:"calico-system", SelfLink:"", UID:"9f16a8b2-c22c-42c4-a0b9-731351a537c7", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769c89c5c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"calico-kube-controllers-769c89c5c9-znhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d1cef27536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:18.389404 containerd[1471]: 2025-11-08 00:37:18.350 [INFO][4149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.3/32] ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.389404 containerd[1471]: 2025-11-08 00:37:18.350 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d1cef27536 ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.389404 containerd[1471]: 2025-11-08 00:37:18.363 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.389404 containerd[1471]: 2025-11-08 00:37:18.371 [INFO][4149] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0", GenerateName:"calico-kube-controllers-769c89c5c9-", Namespace:"calico-system", SelfLink:"", UID:"9f16a8b2-c22c-42c4-a0b9-731351a537c7", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769c89c5c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654", Pod:"calico-kube-controllers-769c89c5c9-znhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d1cef27536", MAC:"d2:28:59:9b:16:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:18.389404 containerd[1471]: 2025-11-08 00:37:18.382 [INFO][4149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654" Namespace="calico-system" Pod="calico-kube-controllers-769c89c5c9-znhjq" WorkloadEndpoint="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:18.461171 containerd[1471]: time="2025-11-08T00:37:18.460160035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:18.461171 containerd[1471]: time="2025-11-08T00:37:18.460206016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:18.461171 containerd[1471]: time="2025-11-08T00:37:18.460229336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:18.461171 containerd[1471]: time="2025-11-08T00:37:18.460307206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:18.494454 systemd[1]: Started cri-containerd-d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654.scope - libcontainer container d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654. 
Nov 8 00:37:18.523211 containerd[1471]: time="2025-11-08T00:37:18.523182018Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:18.525157 containerd[1471]: time="2025-11-08T00:37:18.524660567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:18.526396 containerd[1471]: time="2025-11-08T00:37:18.525606197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:18.527531 kubelet[2549]: E1108 00:37:18.527493 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:18.528983 kubelet[2549]: E1108 00:37:18.528644 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:18.528983 kubelet[2549]: E1108 00:37:18.528778 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jf2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-r4b4p_calico-apiserver(5e48afab-b056-4d85-9cc7-4c4bf819b790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:18.531495 kubelet[2549]: E1108 00:37:18.530549 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:18.539892 containerd[1471]: time="2025-11-08T00:37:18.539866738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769c89c5c9-znhjq,Uid:9f16a8b2-c22c-42c4-a0b9-731351a537c7,Namespace:calico-system,Attempt:1,} returns sandbox id \"d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654\"" Nov 8 00:37:18.541363 containerd[1471]: time="2025-11-08T00:37:18.541260937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:37:18.678293 containerd[1471]: time="2025-11-08T00:37:18.678264277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:18.679557 containerd[1471]: time="2025-11-08T00:37:18.679498605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:37:18.679658 containerd[1471]: time="2025-11-08T00:37:18.679503325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:37:18.679836 kubelet[2549]: E1108 00:37:18.679791 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:37:18.679836 kubelet[2549]: E1108 
00:37:18.679830 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:37:18.680076 kubelet[2549]: E1108 00:37:18.679929 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pph6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-769c89c5c9-znhjq_calico-system(9f16a8b2-c22c-42c4-a0b9-731351a537c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:18.681233 kubelet[2549]: E1108 00:37:18.681206 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:37:18.971853 containerd[1471]: time="2025-11-08T00:37:18.971767833Z" level=info msg="StopPodSandbox for \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\"" Nov 8 00:37:18.972034 containerd[1471]: time="2025-11-08T00:37:18.971949133Z" level=info msg="StopPodSandbox for \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\"" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.032 [INFO][4317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.035 [INFO][4317] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" iface="eth0" netns="/var/run/netns/cni-8eefffbf-d118-1d3b-fe82-c706c1645c67" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.036 [INFO][4317] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" iface="eth0" netns="/var/run/netns/cni-8eefffbf-d118-1d3b-fe82-c706c1645c67" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.037 [INFO][4317] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" iface="eth0" netns="/var/run/netns/cni-8eefffbf-d118-1d3b-fe82-c706c1645c67" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.037 [INFO][4317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.037 [INFO][4317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.065 [INFO][4331] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.065 [INFO][4331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.066 [INFO][4331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.071 [WARNING][4331] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.071 [INFO][4331] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.072 [INFO][4331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:19.076859 containerd[1471]: 2025-11-08 00:37:19.074 [INFO][4317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:19.078386 containerd[1471]: time="2025-11-08T00:37:19.077686231Z" level=info msg="TearDown network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\" successfully" Nov 8 00:37:19.078386 containerd[1471]: time="2025-11-08T00:37:19.077713011Z" level=info msg="StopPodSandbox for \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\" returns successfully" Nov 8 00:37:19.079056 kubelet[2549]: E1108 00:37:19.078943 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:19.080619 containerd[1471]: time="2025-11-08T00:37:19.080228999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gsg4q,Uid:8f5c159c-1100-4d8d-b4a2-0811154f10ae,Namespace:kube-system,Attempt:1,}" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.039 [INFO][4318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.039 [INFO][4318] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" iface="eth0" netns="/var/run/netns/cni-b85cd2b9-fb09-0e54-4210-d6deff0b2b53" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.039 [INFO][4318] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" iface="eth0" netns="/var/run/netns/cni-b85cd2b9-fb09-0e54-4210-d6deff0b2b53" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.039 [INFO][4318] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" iface="eth0" netns="/var/run/netns/cni-b85cd2b9-fb09-0e54-4210-d6deff0b2b53" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.040 [INFO][4318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.040 [INFO][4318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.065 [INFO][4333] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.066 [INFO][4333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.072 [INFO][4333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.077 [WARNING][4333] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.077 [INFO][4333] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.079 [INFO][4333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:19.085479 containerd[1471]: 2025-11-08 00:37:19.082 [INFO][4318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:19.085749 containerd[1471]: time="2025-11-08T00:37:19.085587387Z" level=info msg="TearDown network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\" successfully" Nov 8 00:37:19.085749 containerd[1471]: time="2025-11-08T00:37:19.085607117Z" level=info msg="StopPodSandbox for \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\" returns successfully" Nov 8 00:37:19.086074 containerd[1471]: time="2025-11-08T00:37:19.086039846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-jlfgk,Uid:b7ec371f-050b-4208-a3ac-8f708d9ed8b9,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:37:19.106105 systemd[1]: run-netns-cni\x2db85cd2b9\x2dfb09\x2d0e54\x2d4210\x2dd6deff0b2b53.mount: Deactivated successfully. Nov 8 00:37:19.106599 systemd[1]: run-netns-cni\x2d8eefffbf\x2dd118\x2d1d3b\x2dfe82\x2dc706c1645c67.mount: Deactivated successfully. 
Nov 8 00:37:19.236161 systemd-networkd[1389]: cali199d41bb827: Link UP Nov 8 00:37:19.240688 systemd-networkd[1389]: cali199d41bb827: Gained carrier Nov 8 00:37:19.242602 kubelet[2549]: E1108 00:37:19.242573 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:19.249010 kubelet[2549]: E1108 00:37:19.248987 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.130 [INFO][4352] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.140 [INFO][4352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0 calico-apiserver-5cc866c96c- calico-apiserver b7ec371f-050b-4208-a3ac-8f708d9ed8b9 963 0 2025-11-08 00:36:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc866c96c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-57-24 calico-apiserver-5cc866c96c-jlfgk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali199d41bb827 [] [] }} ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.140 [INFO][4352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.190 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" HandleID="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.191 [INFO][4373] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" HandleID="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-57-24", "pod":"calico-apiserver-5cc866c96c-jlfgk", "timestamp":"2025-11-08 00:37:19.190851518 +0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.191 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.191 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.191 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.199 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.204 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.208 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.211 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.213 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.213 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.214 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.218 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.222 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.4/26] block=192.168.114.0/26 handle="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.222 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.4/26] handle="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" host="172-239-57-24" Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.222 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:37:19.260450 containerd[1471]: 2025-11-08 00:37:19.222 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.4/26] IPv6=[] ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" HandleID="k8s-pod-network.2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.260931 containerd[1471]: 2025-11-08 00:37:19.226 [INFO][4352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7ec371f-050b-4208-a3ac-8f708d9ed8b9", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"calico-apiserver-5cc866c96c-jlfgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali199d41bb827", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:19.260931 containerd[1471]: 2025-11-08 00:37:19.226 [INFO][4352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.4/32] ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.260931 containerd[1471]: 2025-11-08 00:37:19.226 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali199d41bb827 ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.260931 containerd[1471]: 2025-11-08 00:37:19.247 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.260931 containerd[1471]: 2025-11-08 00:37:19.247 [INFO][4352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7ec371f-050b-4208-a3ac-8f708d9ed8b9", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be", Pod:"calico-apiserver-5cc866c96c-jlfgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali199d41bb827", MAC:"da:a3:c1:3d:84:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:19.260931 containerd[1471]: 2025-11-08 00:37:19.258 [INFO][4352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be" Namespace="calico-apiserver" Pod="calico-apiserver-5cc866c96c-jlfgk" WorkloadEndpoint="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:19.290066 containerd[1471]: time="2025-11-08T00:37:19.289982213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:19.290551 containerd[1471]: time="2025-11-08T00:37:19.290241682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:19.290691 containerd[1471]: time="2025-11-08T00:37:19.290632562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:19.291855 containerd[1471]: time="2025-11-08T00:37:19.291822682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:19.324522 systemd[1]: Started cri-containerd-2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be.scope - libcontainer container 2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be. 
Nov 8 00:37:19.349806 systemd-networkd[1389]: calicc54c87f8e9: Link UP Nov 8 00:37:19.350005 systemd-networkd[1389]: calicc54c87f8e9: Gained carrier Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.128 [INFO][4346] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.138 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0 coredns-674b8bbfcf- kube-system 8f5c159c-1100-4d8d-b4a2-0811154f10ae 962 0 2025-11-08 00:36:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-57-24 coredns-674b8bbfcf-gsg4q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc54c87f8e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.138 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.192 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" HandleID="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.192 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" HandleID="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb7d0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-57-24", "pod":"coredns-674b8bbfcf-gsg4q", "timestamp":"2025-11-08 00:37:19.192676307 +0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.193 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.223 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.223 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.301 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.310 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.322 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.325 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.328 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.328 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.330 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3 Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.334 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.338 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.5/26] block=192.168.114.0/26 handle="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.338 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.5/26] handle="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" host="172-239-57-24" Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.338 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:37:19.365112 containerd[1471]: 2025-11-08 00:37:19.338 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.5/26] IPv6=[] ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" HandleID="k8s-pod-network.23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.365969 containerd[1471]: 2025-11-08 00:37:19.341 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f5c159c-1100-4d8d-b4a2-0811154f10ae", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"coredns-674b8bbfcf-gsg4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc54c87f8e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:19.365969 containerd[1471]: 2025-11-08 00:37:19.341 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.5/32] ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.365969 containerd[1471]: 2025-11-08 00:37:19.341 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc54c87f8e9 ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.365969 containerd[1471]: 2025-11-08 00:37:19.349 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" 
WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.365969 containerd[1471]: 2025-11-08 00:37:19.350 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f5c159c-1100-4d8d-b4a2-0811154f10ae", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3", Pod:"coredns-674b8bbfcf-gsg4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc54c87f8e9", MAC:"96:c0:1c:76:89:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:19.365969 containerd[1471]: 2025-11-08 00:37:19.361 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gsg4q" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:19.388985 containerd[1471]: time="2025-11-08T00:37:19.388901197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:19.389296 containerd[1471]: time="2025-11-08T00:37:19.388949958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:19.389296 containerd[1471]: time="2025-11-08T00:37:19.388964878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:19.389296 containerd[1471]: time="2025-11-08T00:37:19.389069718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:19.397717 systemd-networkd[1389]: cali157ad780608: Gained IPv6LL Nov 8 00:37:19.418678 containerd[1471]: time="2025-11-08T00:37:19.418647094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc866c96c-jlfgk,Uid:b7ec371f-050b-4208-a3ac-8f708d9ed8b9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be\"" Nov 8 00:37:19.421254 containerd[1471]: time="2025-11-08T00:37:19.421229803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:19.424765 systemd[1]: Started cri-containerd-23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3.scope - libcontainer container 23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3. Nov 8 00:37:19.461600 containerd[1471]: time="2025-11-08T00:37:19.461569773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gsg4q,Uid:8f5c159c-1100-4d8d-b4a2-0811154f10ae,Namespace:kube-system,Attempt:1,} returns sandbox id \"23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3\"" Nov 8 00:37:19.462263 kubelet[2549]: E1108 00:37:19.462243 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:19.466519 containerd[1471]: time="2025-11-08T00:37:19.466433502Z" level=info msg="CreateContainer within sandbox \"23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:37:19.475647 containerd[1471]: time="2025-11-08T00:37:19.475574577Z" level=info msg="CreateContainer within sandbox \"23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b30d29e68cdf4c41c6938fc63caccdc04c1ed087d7e17cf3c5415b12091f615\"" Nov 8 00:37:19.476081 containerd[1471]: time="2025-11-08T00:37:19.476020937Z" level=info msg="StartContainer for \"5b30d29e68cdf4c41c6938fc63caccdc04c1ed087d7e17cf3c5415b12091f615\"" Nov 8 00:37:19.503952 systemd[1]: Started cri-containerd-5b30d29e68cdf4c41c6938fc63caccdc04c1ed087d7e17cf3c5415b12091f615.scope - libcontainer container 5b30d29e68cdf4c41c6938fc63caccdc04c1ed087d7e17cf3c5415b12091f615. 
Nov 8 00:37:19.543011 containerd[1471]: time="2025-11-08T00:37:19.542904266Z" level=info msg="StartContainer for \"5b30d29e68cdf4c41c6938fc63caccdc04c1ed087d7e17cf3c5415b12091f615\" returns successfully" Nov 8 00:37:19.552724 containerd[1471]: time="2025-11-08T00:37:19.552698582Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:19.553608 containerd[1471]: time="2025-11-08T00:37:19.553510261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:19.553608 containerd[1471]: time="2025-11-08T00:37:19.553564411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:19.553769 kubelet[2549]: E1108 00:37:19.553722 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:19.554072 kubelet[2549]: E1108 00:37:19.553775 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:19.554072 kubelet[2549]: E1108 00:37:19.553885 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j488h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-jlfgk_calico-apiserver(b7ec371f-050b-4208-a3ac-8f708d9ed8b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:19.555497 kubelet[2549]: E1108 00:37:19.555425 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:19.589552 systemd-networkd[1389]: cali0d1cef27536: Gained IPv6LL Nov 8 00:37:19.972094 containerd[1471]: time="2025-11-08T00:37:19.971706190Z" level=info msg="StopPodSandbox for \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\"" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.014 [INFO][4545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.015 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" iface="eth0" netns="/var/run/netns/cni-054b4dec-3a96-853b-e1f2-f8a49083595c" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.015 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" iface="eth0" netns="/var/run/netns/cni-054b4dec-3a96-853b-e1f2-f8a49083595c" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.015 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" iface="eth0" netns="/var/run/netns/cni-054b4dec-3a96-853b-e1f2-f8a49083595c" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.015 [INFO][4545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.015 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.038 [INFO][4552] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.038 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.038 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.043 [WARNING][4552] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.043 [INFO][4552] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.044 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:20.048986 containerd[1471]: 2025-11-08 00:37:20.046 [INFO][4545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:20.049926 containerd[1471]: time="2025-11-08T00:37:20.049152410Z" level=info msg="TearDown network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\" successfully" Nov 8 00:37:20.049926 containerd[1471]: time="2025-11-08T00:37:20.049177830Z" level=info msg="StopPodSandbox for \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\" returns successfully" Nov 8 00:37:20.050490 containerd[1471]: time="2025-11-08T00:37:20.050451400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66pk,Uid:e5ed425e-ae3a-4fee-9b79-13f79eee03b3,Namespace:calico-system,Attempt:1,}" Nov 8 00:37:20.104908 systemd[1]: run-containerd-runc-k8s.io-23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3-runc.Ba4yYB.mount: Deactivated successfully. Nov 8 00:37:20.105108 systemd[1]: run-netns-cni\x2d054b4dec\x2d3a96\x2d853b\x2de1f2\x2df8a49083595c.mount: Deactivated successfully. 
Nov 8 00:37:20.165470 systemd-networkd[1389]: cali73423ff2226: Link UP Nov 8 00:37:20.165785 systemd-networkd[1389]: cali73423ff2226: Gained carrier Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.079 [INFO][4558] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.089 [INFO][4558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-csi--node--driver--w66pk-eth0 csi-node-driver- calico-system e5ed425e-ae3a-4fee-9b79-13f79eee03b3 990 0 2025-11-08 00:36:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-57-24 csi-node-driver-w66pk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali73423ff2226 [] [] }} ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.089 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.126 [INFO][4570] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" HandleID="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.127 [INFO][4570] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" HandleID="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-24", "pod":"csi-node-driver-w66pk", "timestamp":"2025-11-08 00:37:20.126993514 +0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.127 [INFO][4570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.127 [INFO][4570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
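
Both the teardown above and the assignment below bracket their datastore work with the same host-wide IPAM lock, and the release path treats a missing handle as success — hence the earlier WARNING "Asked to release address but it doesn't exist. Ignoring" — so a repeated CNI DEL cannot fail. A toy sketch of that contract, with illustrative names rather than Calico's real API:

    import threading

    _ipam_lock = threading.Lock()       # stands in for the host-wide IPAM lock
    _allocations = {"k8s-pod-network.example-handle": "192.168.114.5"}  # hypothetical

    def release(handle_id: str) -> None:
        with _ipam_lock:                # "About to acquire host-wide IPAM lock."
            ip = _allocations.pop(handle_id, None)
            if ip is None:
                # Mirrors the WARNING above: releasing an unknown handle is
                # ignored, which keeps teardown idempotent under retries.
                print(f"{handle_id}: nothing to release, ignoring")
            else:
                print(f"{handle_id}: released {ip}")

    release("k8s-pod-network.example-handle")  # frees the address
    release("k8s-pod-network.example-handle")  # second DEL is a no-op, not an error
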
Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.127 [INFO][4570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.133 [INFO][4570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.139 [INFO][4570] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.143 [INFO][4570] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.144 [INFO][4570] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.146 [INFO][4570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.146 [INFO][4570] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.148 [INFO][4570] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4 Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.151 [INFO][4570] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.156 [INFO][4570] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.6/26] block=192.168.114.0/26 handle="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.156 [INFO][4570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.6/26] handle="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" host="172-239-57-24" Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.156 [INFO][4570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
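
The claim just logged hands out the next free host address from the node's affine block 192.168.114.0/26. A minimal sketch of that selection with the standard ipaddress module; the assumption that .1 through .5 are already taken is inferred from the fact that this pod receives .6 and the next two sandboxes receive .7 and .8:

    import ipaddress

    block = ipaddress.ip_network("192.168.114.0/26")    # the node's affine block
    used = {f"192.168.114.{n}" for n in range(1, 6)}    # .1-.5 assumed in use

    def next_free(block, used):
        for host in block.hosts():                      # iterates .1 .. .62 for a /26
            if str(host) not in used:
                return host
        raise RuntimeError("block exhausted; IPAM would try another block")

    print(next_free(block, used))                       # -> 192.168.114.6
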
Nov 8 00:37:20.179488 containerd[1471]: 2025-11-08 00:37:20.156 [INFO][4570] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.6/26] IPv6=[] ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" HandleID="k8s-pod-network.fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.180406 containerd[1471]: 2025-11-08 00:37:20.159 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-csi--node--driver--w66pk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5ed425e-ae3a-4fee-9b79-13f79eee03b3", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"csi-node-driver-w66pk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73423ff2226", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:20.180406 containerd[1471]: 2025-11-08 00:37:20.159 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.6/32] ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.180406 containerd[1471]: 2025-11-08 00:37:20.159 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73423ff2226 ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.180406 containerd[1471]: 2025-11-08 00:37:20.167 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.180406 containerd[1471]: 2025-11-08 00:37:20.167 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" 
Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-csi--node--driver--w66pk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5ed425e-ae3a-4fee-9b79-13f79eee03b3", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4", Pod:"csi-node-driver-w66pk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73423ff2226", MAC:"6e:5c:76:78:2b:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:20.180406 containerd[1471]: 2025-11-08 00:37:20.176 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4" Namespace="calico-system" Pod="csi-node-driver-w66pk" WorkloadEndpoint="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:20.199253 containerd[1471]: time="2025-11-08T00:37:20.198953489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:20.199253 containerd[1471]: time="2025-11-08T00:37:20.199011040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:20.199253 containerd[1471]: time="2025-11-08T00:37:20.199025510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:20.199253 containerd[1471]: time="2025-11-08T00:37:20.199107320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:20.224467 systemd[1]: Started cri-containerd-fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4.scope - libcontainer container fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4. 
Nov 8 00:37:20.251790 kubelet[2549]: E1108 00:37:20.251740 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:20.256909 containerd[1471]: time="2025-11-08T00:37:20.256693050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66pk,Uid:e5ed425e-ae3a-4fee-9b79-13f79eee03b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4\"" Nov 8 00:37:20.261264 kubelet[2549]: E1108 00:37:20.261190 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:37:20.262519 containerd[1471]: time="2025-11-08T00:37:20.262266468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:37:20.262574 kubelet[2549]: E1108 00:37:20.261640 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:20.263304 kubelet[2549]: E1108 00:37:20.263179 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:20.268068 kubelet[2549]: I1108 00:37:20.267814 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gsg4q" podStartSLOduration=33.267802077 podStartE2EDuration="33.267802077s" podCreationTimestamp="2025-11-08 00:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:37:20.266159127 +0000 UTC m=+40.388435188" watchObservedRunningTime="2025-11-08 00:37:20.267802077 +0000 UTC m=+40.390078128" Nov 8 00:37:20.405993 containerd[1471]: time="2025-11-08T00:37:20.405950050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:20.406735 containerd[1471]: time="2025-11-08T00:37:20.406704629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:37:20.406870 containerd[1471]: time="2025-11-08T00:37:20.406780439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:37:20.406932 kubelet[2549]: E1108 00:37:20.406890 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:20.407011 kubelet[2549]: E1108 00:37:20.406941 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:20.407090 kubelet[2549]: E1108 00:37:20.407043 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:20.409964 containerd[1471]: time="2025-11-08T00:37:20.409560938Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:37:20.535966 containerd[1471]: time="2025-11-08T00:37:20.535612446Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:20.537647 containerd[1471]: time="2025-11-08T00:37:20.537597285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:37:20.538374 containerd[1471]: time="2025-11-08T00:37:20.537656325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:37:20.538603 kubelet[2549]: E1108 00:37:20.537902 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:37:20.538603 kubelet[2549]: E1108 00:37:20.538144 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:37:20.538603 kubelet[2549]: E1108 00:37:20.538262 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:20.539700 kubelet[2549]: E1108 00:37:20.539662 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:20.869556 systemd-networkd[1389]: cali199d41bb827: Gained IPv6LL Nov 8 00:37:20.979589 containerd[1471]: time="2025-11-08T00:37:20.979218707Z" level=info msg="StopPodSandbox for \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\"" Nov 8 00:37:20.979649 containerd[1471]: time="2025-11-08T00:37:20.979427917Z" level=info msg="StopPodSandbox for \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\"" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.041 [INFO][4669] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.042 [INFO][4669] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" iface="eth0" netns="/var/run/netns/cni-4c3b4562-994b-f2fe-4994-fac56747c497" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.043 [INFO][4669] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" iface="eth0" netns="/var/run/netns/cni-4c3b4562-994b-f2fe-4994-fac56747c497" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.044 [INFO][4669] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" iface="eth0" netns="/var/run/netns/cni-4c3b4562-994b-f2fe-4994-fac56747c497" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.045 [INFO][4669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.045 [INFO][4669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.078 [INFO][4683] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.078 [INFO][4683] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.078 [INFO][4683] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.083 [WARNING][4683] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.083 [INFO][4683] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.084 [INFO][4683] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:21.088551 containerd[1471]: 2025-11-08 00:37:21.086 [INFO][4669] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:21.091840 containerd[1471]: time="2025-11-08T00:37:21.091439849Z" level=info msg="TearDown network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\" successfully" Nov 8 00:37:21.092162 containerd[1471]: time="2025-11-08T00:37:21.091468709Z" level=info msg="StopPodSandbox for \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\" returns successfully" Nov 8 00:37:21.094626 systemd[1]: run-netns-cni\x2d4c3b4562\x2d994b\x2df2fe\x2d4994\x2dfac56747c497.mount: Deactivated successfully. Nov 8 00:37:21.095434 containerd[1471]: time="2025-11-08T00:37:21.094614019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x5vbk,Uid:b339edb0-297f-4caa-90a2-1e5e9c9f0583,Namespace:calico-system,Attempt:1,}" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.045 [INFO][4668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.046 [INFO][4668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" iface="eth0" netns="/var/run/netns/cni-f5eec54d-d3dd-94ad-1ef7-3f13e6aea84b" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.048 [INFO][4668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" iface="eth0" netns="/var/run/netns/cni-f5eec54d-d3dd-94ad-1ef7-3f13e6aea84b" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.049 [INFO][4668] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" iface="eth0" netns="/var/run/netns/cni-f5eec54d-d3dd-94ad-1ef7-3f13e6aea84b" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.049 [INFO][4668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.049 [INFO][4668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.078 [INFO][4685] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.078 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.084 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.091 [WARNING][4685] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.091 [INFO][4685] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.095 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:21.101447 containerd[1471]: 2025-11-08 00:37:21.098 [INFO][4668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:21.101804 containerd[1471]: time="2025-11-08T00:37:21.101542667Z" level=info msg="TearDown network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\" successfully" Nov 8 00:37:21.104154 containerd[1471]: time="2025-11-08T00:37:21.101835737Z" level=info msg="StopPodSandbox for \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\" returns successfully" Nov 8 00:37:21.104154 containerd[1471]: time="2025-11-08T00:37:21.103798747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhv8n,Uid:50e2147d-0531-46fa-b3e7-3b3b05f008fd,Namespace:kube-system,Attempt:1,}" Nov 8 00:37:21.104251 kubelet[2549]: E1108 00:37:21.102120 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:21.106271 systemd[1]: run-netns-cni\x2df5eec54d\x2dd3dd\x2d94ad\x2d1ef7\x2d3f13e6aea84b.mount: Deactivated successfully. 
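
The run-netns mount units that systemd reports deactivating encode the namespace path with systemd's unit-name escaping: a literal "-" becomes "\x2d", while unescaped dashes separate path components. A small decoder recovers the path from the unit name just logged:

    unit = r"run-netns-cni\x2df5eec54d\x2dd3dd\x2d94ad\x2d1ef7\x2d3f13e6aea84b.mount"

    name = unit.removesuffix(".mount")
    name = name.replace(r"\x2d", "\x00")   # protect escaped literal dashes
    name = name.replace("-", "/")          # remaining dashes are path separators
    name = name.replace("\x00", "-")       # restore the literal dashes
    print("/" + name)   # -> /run/netns/cni-f5eec54d-d3dd-94ad-1ef7-3f13e6aea84b
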
Nov 8 00:37:21.247207 systemd-networkd[1389]: cali3b8c955bd8d: Link UP Nov 8 00:37:21.247468 systemd-networkd[1389]: cali3b8c955bd8d: Gained carrier Nov 8 00:37:21.264308 kubelet[2549]: E1108 00:37:21.264285 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:21.266168 kubelet[2549]: E1108 00:37:21.266144 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.158 [INFO][4698] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.169 [INFO][4698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0 goldmane-666569f655- calico-system b339edb0-297f-4caa-90a2-1e5e9c9f0583 1023 0 2025-11-08 00:36:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-57-24 goldmane-666569f655-x5vbk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3b8c955bd8d [] [] }} ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.170 [INFO][4698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.204 [INFO][4724] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" HandleID="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.204 [INFO][4724] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" HandleID="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-24", "pod":"goldmane-666569f655-x5vbk", "timestamp":"2025-11-08 00:37:21.204504914 +0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.204 [INFO][4724] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.204 [INFO][4724] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.204 [INFO][4724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.212 [INFO][4724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.216 [INFO][4724] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.220 [INFO][4724] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.222 [INFO][4724] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.224 [INFO][4724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.224 [INFO][4724] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.226 [INFO][4724] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730 Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.230 [INFO][4724] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.236 [INFO][4724] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.7/26] block=192.168.114.0/26 handle="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.236 [INFO][4724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.7/26] handle="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" host="172-239-57-24" Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.236 [INFO][4724] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
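
Stepping back from the IPAM trace: the ImagePullBackOff entries interleaved above are kubelet re-queueing the failed pulls under exponential backoff. A 10-second initial delay doubling to a 5-minute cap matches the commonly cited kubelet defaults — assumed here, not read from this node's configuration — so the retry schedule looks like:

    def backoff_schedule(initial=10, cap=300):
        """Yield successive wait times: 10, 20, 40, ... capped at 300 seconds."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= 2

    sched = backoff_schedule()
    for attempt in range(1, 8):
        print(f"attempt {attempt}: wait {next(sched)}s before re-pulling")
    # -> 10, 20, 40, 80, 160, 300, 300 ... until the tag becomes pullable
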
Nov 8 00:37:21.267856 containerd[1471]: 2025-11-08 00:37:21.236 [INFO][4724] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.7/26] IPv6=[] ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" HandleID="k8s-pod-network.fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.268554 containerd[1471]: 2025-11-08 00:37:21.239 [INFO][4698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b339edb0-297f-4caa-90a2-1e5e9c9f0583", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"goldmane-666569f655-x5vbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b8c955bd8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:21.268554 containerd[1471]: 2025-11-08 00:37:21.239 [INFO][4698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.7/32] ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.268554 containerd[1471]: 2025-11-08 00:37:21.239 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b8c955bd8d ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.268554 containerd[1471]: 2025-11-08 00:37:21.246 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.268554 containerd[1471]: 2025-11-08 00:37:21.247 [INFO][4698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" 
WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b339edb0-297f-4caa-90a2-1e5e9c9f0583", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730", Pod:"goldmane-666569f655-x5vbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b8c955bd8d", MAC:"ea:bf:5d:68:ee:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:21.268554 containerd[1471]: 2025-11-08 00:37:21.258 [INFO][4698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730" Namespace="calico-system" Pod="goldmane-666569f655-x5vbk" WorkloadEndpoint="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:21.268970 kubelet[2549]: E1108 00:37:21.267818 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:21.292738 containerd[1471]: time="2025-11-08T00:37:21.292648865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:21.292812 containerd[1471]: time="2025-11-08T00:37:21.292724675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:21.292812 containerd[1471]: time="2025-11-08T00:37:21.292740435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:21.297749 containerd[1471]: time="2025-11-08T00:37:21.297608713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:21.317546 systemd-networkd[1389]: calicc54c87f8e9: Gained IPv6LL Nov 8 00:37:21.326912 systemd[1]: Started cri-containerd-fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730.scope - libcontainer container fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730. Nov 8 00:37:21.381460 systemd-networkd[1389]: cali73423ff2226: Gained IPv6LL Nov 8 00:37:21.399229 systemd-networkd[1389]: cali5a9a7046a65: Link UP Nov 8 00:37:21.399502 systemd-networkd[1389]: cali5a9a7046a65: Gained carrier Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.166 [INFO][4710] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.178 [INFO][4710] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0 coredns-674b8bbfcf- kube-system 50e2147d-0531-46fa-b3e7-3b3b05f008fd 1024 0 2025-11-08 00:36:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-57-24 coredns-674b8bbfcf-fhv8n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a9a7046a65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.178 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.215 [INFO][4729] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" HandleID="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.215 [INFO][4729] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" HandleID="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d53b0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-57-24", "pod":"coredns-674b8bbfcf-fhv8n", "timestamp":"2025-11-08 00:37:21.215258252 +0000 UTC"}, Hostname:"172-239-57-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.215 [INFO][4729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.238 [INFO][4729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.238 [INFO][4729] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-24' Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.335 [INFO][4729] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.342 [INFO][4729] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.354 [INFO][4729] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.357 [INFO][4729] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.360 [INFO][4729] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.360 [INFO][4729] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.362 [INFO][4729] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32 Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.378 [INFO][4729] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.386 [INFO][4729] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.8/26] block=192.168.114.0/26 handle="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.386 [INFO][4729] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.8/26] handle="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" host="172-239-57-24" Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.386 [INFO][4729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
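
The bracketed Calico CNI entries that containerd relays share a fixed shape — timestamp, [LEVEL][pid], source file and line, then the message — which makes them easy to pull apart when working through a capture like this one. A sketch with a hypothetical regex, tested against an entry from the trace above:

    import re

    CNI_LINE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"\[(?P<level>\w+)\]\[(?P<pid>\d+)\] "
        r"(?P<src>\S+) (?P<line>\d+): (?P<msg>.*)"
    )

    sample = ("2025-11-08 00:37:21.386 [INFO][4729] "
              "ipam/ipam_plugin.go 398: Released host-wide IPAM lock.")
    m = CNI_LINE.match(sample)
    print(m.group("level"), m.group("src"), "->", m.group("msg"))
    # -> INFO ipam/ipam_plugin.go -> Released host-wide IPAM lock.
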
Nov 8 00:37:21.427372 containerd[1471]: 2025-11-08 00:37:21.386 [INFO][4729] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.8/26] IPv6=[] ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" HandleID="k8s-pod-network.cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.427834 containerd[1471]: 2025-11-08 00:37:21.390 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"50e2147d-0531-46fa-b3e7-3b3b05f008fd", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"", Pod:"coredns-674b8bbfcf-fhv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a9a7046a65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:21.427834 containerd[1471]: 2025-11-08 00:37:21.390 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.8/32] ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.427834 containerd[1471]: 2025-11-08 00:37:21.390 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a9a7046a65 ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.427834 containerd[1471]: 2025-11-08 00:37:21.398 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" 
WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.427834 containerd[1471]: 2025-11-08 00:37:21.399 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"50e2147d-0531-46fa-b3e7-3b3b05f008fd", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32", Pod:"coredns-674b8bbfcf-fhv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a9a7046a65", MAC:"66:3a:be:5e:c6:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:21.427834 containerd[1471]: 2025-11-08 00:37:21.422 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhv8n" WorkloadEndpoint="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:21.453591 containerd[1471]: time="2025-11-08T00:37:21.453481349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:37:21.453667 containerd[1471]: time="2025-11-08T00:37:21.453618618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:37:21.453734 containerd[1471]: time="2025-11-08T00:37:21.453700218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:21.453913 containerd[1471]: time="2025-11-08T00:37:21.453878489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:37:21.505484 systemd[1]: Started cri-containerd-cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32.scope - libcontainer container cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32. Nov 8 00:37:21.517545 containerd[1471]: time="2025-11-08T00:37:21.517514934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x5vbk,Uid:b339edb0-297f-4caa-90a2-1e5e9c9f0583,Namespace:calico-system,Attempt:1,} returns sandbox id \"fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730\"" Nov 8 00:37:21.523038 containerd[1471]: time="2025-11-08T00:37:21.522999733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:37:21.565384 containerd[1471]: time="2025-11-08T00:37:21.565346124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhv8n,Uid:50e2147d-0531-46fa-b3e7-3b3b05f008fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32\"" Nov 8 00:37:21.567124 kubelet[2549]: E1108 00:37:21.567089 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:21.576729 containerd[1471]: time="2025-11-08T00:37:21.576693431Z" level=info msg="CreateContainer within sandbox \"cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:37:21.586884 containerd[1471]: time="2025-11-08T00:37:21.586839309Z" level=info msg="CreateContainer within sandbox \"cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dadc37bbadfd6b90e0fa1277221fca20d8ced430ad79f0681df8b228b32430c7\"" Nov 8 00:37:21.589019 containerd[1471]: time="2025-11-08T00:37:21.587929899Z" level=info msg="StartContainer for \"dadc37bbadfd6b90e0fa1277221fca20d8ced430ad79f0681df8b228b32430c7\"" Nov 8 00:37:21.623471 systemd[1]: Started cri-containerd-dadc37bbadfd6b90e0fa1277221fca20d8ced430ad79f0681df8b228b32430c7.scope - libcontainer container dadc37bbadfd6b90e0fa1277221fca20d8ced430ad79f0681df8b228b32430c7. 
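The kubelet's "Nameserver limits exceeded" record above is informational rather than fatal: the glibc resolver honors at most three `nameserver` entries, so the kubelet truncates the node's list before propagating it into pod resolv.conf files and logs the entries it kept (here 172.232.0.22, 172.232.0.9, and 172.232.0.19). A minimal sketch of the same check against a resolv.conf, assuming only the standard `nameserver <address>` syntax:

```python
#!/usr/bin/env python3
"""Warn when a resolv.conf exceeds the glibc cap of 3 nameservers (sketch)."""
import sys

MAXNS = 3  # glibc resolver limit; the kubelet truncates to this and logs the warning seen above

def nameservers(path: str) -> list:
    servers = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            fields = line.split()
            # Skip blanks and comments; keep "nameserver <addr>" entries.
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    return servers

if __name__ == "__main__":
    ns = nameservers(sys.argv[1] if len(sys.argv) > 1 else "/etc/resolv.conf")
    print(f"{len(ns)} nameserver(s): {', '.join(ns) or '-'}")
    if len(ns) > MAXNS:
        print(f"warning: only the first {MAXNS} are applied; "
              f"{', '.join(ns[MAXNS:])} would be dropped", file=sys.stderr)
```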
Nov 8 00:37:21.649792 containerd[1471]: time="2025-11-08T00:37:21.649768565Z" level=info msg="StartContainer for \"dadc37bbadfd6b90e0fa1277221fca20d8ced430ad79f0681df8b228b32430c7\" returns successfully" Nov 8 00:37:21.653512 containerd[1471]: time="2025-11-08T00:37:21.653467084Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:21.654985 containerd[1471]: time="2025-11-08T00:37:21.654945454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:37:21.655345 containerd[1471]: time="2025-11-08T00:37:21.655061633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:21.655412 kubelet[2549]: E1108 00:37:21.655373 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:37:21.655449 kubelet[2549]: E1108 00:37:21.655428 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:37:21.655756 kubelet[2549]: E1108 00:37:21.655702 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcdpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x5vbk_calico-system(b339edb0-297f-4caa-90a2-1e5e9c9f0583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:21.656883 kubelet[2549]: E1108 00:37:21.656855 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:37:22.269028 kubelet[2549]: E1108 00:37:22.268993 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:22.271277 kubelet[2549]: E1108 00:37:22.271214 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:22.271879 kubelet[2549]: E1108 00:37:22.271728 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:37:22.277344 kubelet[2549]: E1108 00:37:22.275370 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:22.287123 kubelet[2549]: I1108 00:37:22.286896 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fhv8n" podStartSLOduration=35.286882674 podStartE2EDuration="35.286882674s" podCreationTimestamp="2025-11-08 00:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:37:22.286039294 +0000 UTC m=+42.408315375" watchObservedRunningTime="2025-11-08 00:37:22.286882674 +0000 UTC m=+42.409158725" Nov 8 00:37:22.405556 systemd-networkd[1389]: cali3b8c955bd8d: Gained IPv6LL Nov 8 00:37:22.533568 systemd-networkd[1389]: cali5a9a7046a65: Gained IPv6LL Nov 8 00:37:22.974430 containerd[1471]: time="2025-11-08T00:37:22.973848795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:37:23.106255 containerd[1471]: time="2025-11-08T00:37:23.106062450Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:23.107343 containerd[1471]: time="2025-11-08T00:37:23.107231231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:37:23.107343 containerd[1471]: time="2025-11-08T00:37:23.107276831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:37:23.107593 kubelet[2549]: E1108 00:37:23.107546 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:37:23.107650 kubelet[2549]: E1108 00:37:23.107600 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:37:23.107797 kubelet[2549]: E1108 00:37:23.107754 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e094445afe4d4a0db1ebe2df45d03ef3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:23.111419 containerd[1471]: time="2025-11-08T00:37:23.111397820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:37:23.244572 containerd[1471]: time="2025-11-08T00:37:23.244310888Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:23.245586 containerd[1471]: time="2025-11-08T00:37:23.245519968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:37:23.245707 containerd[1471]: time="2025-11-08T00:37:23.245638568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:37:23.245890 kubelet[2549]: E1108 00:37:23.245844 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:37:23.246241 kubelet[2549]: E1108 00:37:23.245901 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:37:23.246241 kubelet[2549]: E1108 00:37:23.246025 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:23.247576 kubelet[2549]: E1108 00:37:23.247525 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:37:23.273292 kubelet[2549]: E1108 00:37:23.273245 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:23.275660 kubelet[2549]: E1108 
00:37:23.275534 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:37:24.278470 kubelet[2549]: E1108 00:37:24.278345 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:27.773809 kubelet[2549]: I1108 00:37:27.772555 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:37:27.773809 kubelet[2549]: E1108 00:37:27.772990 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:28.288569 kubelet[2549]: E1108 00:37:28.288520 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:28.351282 kernel: bpftool[5043]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:37:28.652543 systemd-networkd[1389]: vxlan.calico: Link UP Nov 8 00:37:28.652556 systemd-networkd[1389]: vxlan.calico: Gained carrier Nov 8 00:37:30.726491 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Nov 8 00:37:32.973118 containerd[1471]: time="2025-11-08T00:37:32.972711697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:37:33.105095 containerd[1471]: time="2025-11-08T00:37:33.105032299Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:33.106381 containerd[1471]: time="2025-11-08T00:37:33.106310210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:37:33.106472 containerd[1471]: time="2025-11-08T00:37:33.106426689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:37:33.106643 kubelet[2549]: E1108 00:37:33.106598 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:37:33.107099 kubelet[2549]: E1108 00:37:33.106650 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:37:33.107099 kubelet[2549]: E1108 00:37:33.106785 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pph6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-769c89c5c9-znhjq_calico-system(9f16a8b2-c22c-42c4-a0b9-731351a537c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:33.108391 kubelet[2549]: E1108 00:37:33.108295 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:37:33.973248 containerd[1471]: time="2025-11-08T00:37:33.973022441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:34.116446 containerd[1471]: time="2025-11-08T00:37:34.116351598Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:34.117691 containerd[1471]: time="2025-11-08T00:37:34.117631790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:34.118139 containerd[1471]: time="2025-11-08T00:37:34.117674870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:34.118192 kubelet[2549]: E1108 00:37:34.117924 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:34.118192 kubelet[2549]: E1108 00:37:34.117970 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:34.118602 kubelet[2549]: E1108 00:37:34.118239 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jf2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-r4b4p_calico-apiserver(5e48afab-b056-4d85-9cc7-4c4bf819b790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:34.119798 containerd[1471]: time="2025-11-08T00:37:34.118935301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:34.119966 kubelet[2549]: E1108 00:37:34.119357 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:34.253613 containerd[1471]: time="2025-11-08T00:37:34.253445123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:34.254850 containerd[1471]: time="2025-11-08T00:37:34.254757094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:34.255007 containerd[1471]: time="2025-11-08T00:37:34.254763364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:34.255118 kubelet[2549]: E1108 00:37:34.255078 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:34.255183 kubelet[2549]: E1108 00:37:34.255135 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:34.255314 kubelet[2549]: E1108 
00:37:34.255272 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j488h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-jlfgk_calico-apiserver(b7ec371f-050b-4208-a3ac-8f708d9ed8b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:34.256756 kubelet[2549]: E1108 00:37:34.256728 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:34.973238 containerd[1471]: time="2025-11-08T00:37:34.973167406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:37:35.116804 containerd[1471]: time="2025-11-08T00:37:35.116730831Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:35.117932 containerd[1471]: time="2025-11-08T00:37:35.117887311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:37:35.118080 containerd[1471]: time="2025-11-08T00:37:35.117964021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:37:35.118291 kubelet[2549]: E1108 00:37:35.118248 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:35.118710 kubelet[2549]: E1108 00:37:35.118303 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:35.118710 kubelet[2549]: E1108 00:37:35.118565 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:35.120134 containerd[1471]: time="2025-11-08T00:37:35.119061342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 
00:37:35.246867 containerd[1471]: time="2025-11-08T00:37:35.246688914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:35.248302 containerd[1471]: time="2025-11-08T00:37:35.248213826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:37:35.248462 containerd[1471]: time="2025-11-08T00:37:35.248342866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:35.248631 kubelet[2549]: E1108 00:37:35.248571 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:37:35.248708 kubelet[2549]: E1108 00:37:35.248650 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:37:35.248920 kubelet[2549]: E1108 00:37:35.248869 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcdpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x5vbk_calico-system(b339edb0-297f-4caa-90a2-1e5e9c9f0583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:35.249682 containerd[1471]: time="2025-11-08T00:37:35.249648087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:37:35.250046 kubelet[2549]: E1108 00:37:35.249999 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:37:35.383921 containerd[1471]: time="2025-11-08T00:37:35.383861914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:35.384935 containerd[1471]: time="2025-11-08T00:37:35.384822005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:37:35.384935 containerd[1471]: time="2025-11-08T00:37:35.384851885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:37:35.385037 kubelet[2549]: E1108 00:37:35.384973 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:37:35.385037 kubelet[2549]: E1108 00:37:35.385016 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:37:35.385216 kubelet[2549]: E1108 00:37:35.385162 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:35.386700 kubelet[2549]: E1108 00:37:35.386657 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:35.974941 kubelet[2549]: E1108 00:37:35.974824 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:37:39.968171 containerd[1471]: time="2025-11-08T00:37:39.968031996Z" level=info msg="StopPodSandbox for \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\"" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.004 [WARNING][5171] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e48afab-b056-4d85-9cc7-4c4bf819b790", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e", Pod:"calico-apiserver-5cc866c96c-r4b4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali157ad780608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.004 [INFO][5171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.004 [INFO][5171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" iface="eth0" netns="" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.004 [INFO][5171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.004 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.029 [INFO][5180] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.029 [INFO][5180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.029 [INFO][5180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.036 [WARNING][5180] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.036 [INFO][5180] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.038 [INFO][5180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.043661 containerd[1471]: 2025-11-08 00:37:40.041 [INFO][5171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.043661 containerd[1471]: time="2025-11-08T00:37:40.043581810Z" level=info msg="TearDown network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\" successfully" Nov 8 00:37:40.043661 containerd[1471]: time="2025-11-08T00:37:40.043600880Z" level=info msg="StopPodSandbox for \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\" returns successfully" Nov 8 00:37:40.044112 containerd[1471]: time="2025-11-08T00:37:40.044074840Z" level=info msg="RemovePodSandbox for \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\"" Nov 8 00:37:40.044147 containerd[1471]: time="2025-11-08T00:37:40.044114310Z" level=info msg="Forcibly stopping sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\"" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.077 [WARNING][5194] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e48afab-b056-4d85-9cc7-4c4bf819b790", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"3987a3c6d45c9f4bb20de45e5cb9525ef53804d716e95943a6934eddb74f0d1e", Pod:"calico-apiserver-5cc866c96c-r4b4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali157ad780608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.078 [INFO][5194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.078 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" iface="eth0" netns="" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.078 [INFO][5194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.078 [INFO][5194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.102 [INFO][5202] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.102 [INFO][5202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.102 [INFO][5202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.108 [WARNING][5202] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.108 [INFO][5202] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" HandleID="k8s-pod-network.b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--r4b4p-eth0" Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.109 [INFO][5202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.114033 containerd[1471]: 2025-11-08 00:37:40.111 [INFO][5194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1" Nov 8 00:37:40.114546 containerd[1471]: time="2025-11-08T00:37:40.114070940Z" level=info msg="TearDown network for sandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\" successfully" Nov 8 00:37:40.117983 containerd[1471]: time="2025-11-08T00:37:40.117940064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:40.118065 containerd[1471]: time="2025-11-08T00:37:40.117984374Z" level=info msg="RemovePodSandbox \"b4d50b9e9f2e72ccb7f735b350d32eb8e70553381519a3dfb113950a1fc5fcd1\" returns successfully" Nov 8 00:37:40.118579 containerd[1471]: time="2025-11-08T00:37:40.118559535Z" level=info msg="StopPodSandbox for \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\"" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.152 [WARNING][5216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f5c159c-1100-4d8d-b4a2-0811154f10ae", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3", Pod:"coredns-674b8bbfcf-gsg4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc54c87f8e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.153 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.153 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" iface="eth0" netns="" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.153 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.153 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.186 [INFO][5225] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.186 [INFO][5225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.186 [INFO][5225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.194 [WARNING][5225] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.194 [INFO][5225] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.196 [INFO][5225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.201940 containerd[1471]: 2025-11-08 00:37:40.199 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.202656 containerd[1471]: time="2025-11-08T00:37:40.201992198Z" level=info msg="TearDown network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\" successfully" Nov 8 00:37:40.202656 containerd[1471]: time="2025-11-08T00:37:40.202015638Z" level=info msg="StopPodSandbox for \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\" returns successfully" Nov 8 00:37:40.202656 containerd[1471]: time="2025-11-08T00:37:40.202602928Z" level=info msg="RemovePodSandbox for \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\"" Nov 8 00:37:40.202656 containerd[1471]: time="2025-11-08T00:37:40.202651918Z" level=info msg="Forcibly stopping sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\"" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.240 [WARNING][5240] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f5c159c-1100-4d8d-b4a2-0811154f10ae", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"23a270d8d84f492e617ef0a837a138818b24b2ce033b5a73a54d3ad90e3122c3", Pod:"coredns-674b8bbfcf-gsg4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc54c87f8e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.240 [INFO][5240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.240 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" iface="eth0" netns="" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.240 [INFO][5240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.240 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.272 [INFO][5262] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.272 [INFO][5262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.272 [INFO][5262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.278 [WARNING][5262] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.278 [INFO][5262] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" HandleID="k8s-pod-network.e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--gsg4q-eth0" Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.280 [INFO][5262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.286429 containerd[1471]: 2025-11-08 00:37:40.283 [INFO][5240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764" Nov 8 00:37:40.286429 containerd[1471]: time="2025-11-08T00:37:40.286220721Z" level=info msg="TearDown network for sandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\" successfully" Nov 8 00:37:40.291637 containerd[1471]: time="2025-11-08T00:37:40.291593456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:40.291710 containerd[1471]: time="2025-11-08T00:37:40.291640366Z" level=info msg="RemovePodSandbox \"e59ed0fa1bf33d227ca30ef7c1d42b204de6d3ad596921908d166daa58d93764\" returns successfully" Nov 8 00:37:40.293485 containerd[1471]: time="2025-11-08T00:37:40.292688938Z" level=info msg="StopPodSandbox for \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\"" Nov 8 00:37:40.336028 kubelet[2549]: E1108 00:37:40.335869 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.345 [WARNING][5284] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-csi--node--driver--w66pk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5ed425e-ae3a-4fee-9b79-13f79eee03b3", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4", Pod:"csi-node-driver-w66pk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73423ff2226", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.345 [INFO][5284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.345 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" iface="eth0" netns="" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.345 [INFO][5284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.345 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.377 [INFO][5291] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.377 [INFO][5291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.377 [INFO][5291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.384 [WARNING][5291] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.384 [INFO][5291] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.386 [INFO][5291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.391768 containerd[1471]: 2025-11-08 00:37:40.388 [INFO][5284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.393240 containerd[1471]: time="2025-11-08T00:37:40.391792657Z" level=info msg="TearDown network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\" successfully" Nov 8 00:37:40.393240 containerd[1471]: time="2025-11-08T00:37:40.391818437Z" level=info msg="StopPodSandbox for \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\" returns successfully" Nov 8 00:37:40.393240 containerd[1471]: time="2025-11-08T00:37:40.392610117Z" level=info msg="RemovePodSandbox for \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\"" Nov 8 00:37:40.393240 containerd[1471]: time="2025-11-08T00:37:40.392636487Z" level=info msg="Forcibly stopping sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\"" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.424 [WARNING][5305] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-csi--node--driver--w66pk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5ed425e-ae3a-4fee-9b79-13f79eee03b3", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"fd92b8654296b2719e36d01f0668395d5b705054ac40503039b14086d8efb6f4", Pod:"csi-node-driver-w66pk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73423ff2226", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.424 [INFO][5305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.424 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" iface="eth0" netns="" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.425 [INFO][5305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.425 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.450 [INFO][5312] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.451 [INFO][5312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.451 [INFO][5312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.456 [WARNING][5312] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.456 [INFO][5312] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" HandleID="k8s-pod-network.cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Workload="172--239--57--24-k8s-csi--node--driver--w66pk-eth0" Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.458 [INFO][5312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.464410 containerd[1471]: 2025-11-08 00:37:40.460 [INFO][5305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502" Nov 8 00:37:40.464410 containerd[1471]: time="2025-11-08T00:37:40.462758507Z" level=info msg="TearDown network for sandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\" successfully" Nov 8 00:37:40.466818 containerd[1471]: time="2025-11-08T00:37:40.466792991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:40.467129 containerd[1471]: time="2025-11-08T00:37:40.466836181Z" level=info msg="RemovePodSandbox \"cc7c23f061963f42302f5133879373321877617e39d3cf349dad52e923233502\" returns successfully" Nov 8 00:37:40.467724 containerd[1471]: time="2025-11-08T00:37:40.467456481Z" level=info msg="StopPodSandbox for \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\"" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.501 [WARNING][5326] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0", GenerateName:"calico-kube-controllers-769c89c5c9-", Namespace:"calico-system", SelfLink:"", UID:"9f16a8b2-c22c-42c4-a0b9-731351a537c7", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769c89c5c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654", Pod:"calico-kube-controllers-769c89c5c9-znhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d1cef27536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.501 [INFO][5326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.501 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" iface="eth0" netns="" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.501 [INFO][5326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.501 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.531 [INFO][5334] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.532 [INFO][5334] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.532 [INFO][5334] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.539 [WARNING][5334] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.539 [INFO][5334] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.541 [INFO][5334] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.547741 containerd[1471]: 2025-11-08 00:37:40.545 [INFO][5326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.547741 containerd[1471]: time="2025-11-08T00:37:40.547590701Z" level=info msg="TearDown network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\" successfully" Nov 8 00:37:40.547741 containerd[1471]: time="2025-11-08T00:37:40.547616771Z" level=info msg="StopPodSandbox for \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\" returns successfully" Nov 8 00:37:40.549819 containerd[1471]: time="2025-11-08T00:37:40.548576963Z" level=info msg="RemovePodSandbox for \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\"" Nov 8 00:37:40.549819 containerd[1471]: time="2025-11-08T00:37:40.548604153Z" level=info msg="Forcibly stopping sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\"" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.586 [WARNING][5348] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0", GenerateName:"calico-kube-controllers-769c89c5c9-", Namespace:"calico-system", SelfLink:"", UID:"9f16a8b2-c22c-42c4-a0b9-731351a537c7", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769c89c5c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"d0bcf08fb98a0fd8f2fce41461705452073598c9667614fd19e22202536c8654", Pod:"calico-kube-controllers-769c89c5c9-znhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0d1cef27536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.586 [INFO][5348] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.586 [INFO][5348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" iface="eth0" netns="" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.586 [INFO][5348] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.586 [INFO][5348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.608 [INFO][5355] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.608 [INFO][5355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.608 [INFO][5355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.614 [WARNING][5355] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.614 [INFO][5355] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" HandleID="k8s-pod-network.6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Workload="172--239--57--24-k8s-calico--kube--controllers--769c89c5c9--znhjq-eth0" Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.615 [INFO][5355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.621084 containerd[1471]: 2025-11-08 00:37:40.617 [INFO][5348] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b" Nov 8 00:37:40.621655 containerd[1471]: time="2025-11-08T00:37:40.621622845Z" level=info msg="TearDown network for sandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\" successfully" Nov 8 00:37:40.625017 containerd[1471]: time="2025-11-08T00:37:40.624984048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:40.625064 containerd[1471]: time="2025-11-08T00:37:40.625030308Z" level=info msg="RemovePodSandbox \"6d0ce0b32150983dee21c664ca8f4db61f814192276f7ccc0970cbfef36b808b\" returns successfully" Nov 8 00:37:40.625784 containerd[1471]: time="2025-11-08T00:37:40.625542519Z" level=info msg="StopPodSandbox for \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\"" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.658 [WARNING][5370] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"50e2147d-0531-46fa-b3e7-3b3b05f008fd", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32", Pod:"coredns-674b8bbfcf-fhv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a9a7046a65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.659 [INFO][5370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.659 [INFO][5370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" iface="eth0" netns="" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.659 [INFO][5370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.659 [INFO][5370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.682 [INFO][5377] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.682 [INFO][5377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.682 [INFO][5377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.688 [WARNING][5377] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.688 [INFO][5377] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.689 [INFO][5377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.695049 containerd[1471]: 2025-11-08 00:37:40.692 [INFO][5370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.696185 containerd[1471]: time="2025-11-08T00:37:40.695076238Z" level=info msg="TearDown network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\" successfully" Nov 8 00:37:40.696185 containerd[1471]: time="2025-11-08T00:37:40.695101778Z" level=info msg="StopPodSandbox for \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\" returns successfully" Nov 8 00:37:40.696955 containerd[1471]: time="2025-11-08T00:37:40.696694020Z" level=info msg="RemovePodSandbox for \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\"" Nov 8 00:37:40.696955 containerd[1471]: time="2025-11-08T00:37:40.696720880Z" level=info msg="Forcibly stopping sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\"" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.733 [WARNING][5391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"50e2147d-0531-46fa-b3e7-3b3b05f008fd", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"cd4534d4b40b26a87575b81ad49119e0ffc1fc63d53cd84786b62aecfef3ed32", Pod:"coredns-674b8bbfcf-fhv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a9a7046a65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.733 [INFO][5391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.733 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" iface="eth0" netns="" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.733 [INFO][5391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.733 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.755 [INFO][5398] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.756 [INFO][5398] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.756 [INFO][5398] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.763 [WARNING][5398] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.763 [INFO][5398] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" HandleID="k8s-pod-network.0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Workload="172--239--57--24-k8s-coredns--674b8bbfcf--fhv8n-eth0" Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.764 [INFO][5398] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.770859 containerd[1471]: 2025-11-08 00:37:40.767 [INFO][5391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff" Nov 8 00:37:40.771402 containerd[1471]: time="2025-11-08T00:37:40.770913053Z" level=info msg="TearDown network for sandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\" successfully" Nov 8 00:37:40.775225 containerd[1471]: time="2025-11-08T00:37:40.774965487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:40.775225 containerd[1471]: time="2025-11-08T00:37:40.775209168Z" level=info msg="RemovePodSandbox \"0191678700f20b2b5ae54977a258e70a19def7b08a174bef86db802f01e5f6ff\" returns successfully" Nov 8 00:37:40.775900 containerd[1471]: time="2025-11-08T00:37:40.775873849Z" level=info msg="StopPodSandbox for \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\"" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.819 [WARNING][5412] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b339edb0-297f-4caa-90a2-1e5e9c9f0583", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730", Pod:"goldmane-666569f655-x5vbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b8c955bd8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.820 [INFO][5412] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.820 [INFO][5412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" iface="eth0" netns="" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.820 [INFO][5412] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.820 [INFO][5412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.840 [INFO][5419] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.840 [INFO][5419] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.841 [INFO][5419] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.847 [WARNING][5419] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.847 [INFO][5419] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.849 [INFO][5419] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.853586 containerd[1471]: 2025-11-08 00:37:40.851 [INFO][5412] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.853586 containerd[1471]: time="2025-11-08T00:37:40.853550235Z" level=info msg="TearDown network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\" successfully" Nov 8 00:37:40.853586 containerd[1471]: time="2025-11-08T00:37:40.853574945Z" level=info msg="StopPodSandbox for \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\" returns successfully" Nov 8 00:37:40.855226 containerd[1471]: time="2025-11-08T00:37:40.855139168Z" level=info msg="RemovePodSandbox for \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\"" Nov 8 00:37:40.855226 containerd[1471]: time="2025-11-08T00:37:40.855168838Z" level=info msg="Forcibly stopping sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\"" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.887 [WARNING][5435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b339edb0-297f-4caa-90a2-1e5e9c9f0583", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"fa9b5eb400692050cb73359a017442a616f7699bff197d12a212b3103e2e4730", Pod:"goldmane-666569f655-x5vbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b8c955bd8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.887 [INFO][5435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.887 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" iface="eth0" netns="" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.887 [INFO][5435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.887 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.906 [INFO][5442] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.907 [INFO][5442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.907 [INFO][5442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.912 [WARNING][5442] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.912 [INFO][5442] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" HandleID="k8s-pod-network.13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Workload="172--239--57--24-k8s-goldmane--666569f655--x5vbk-eth0" Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.914 [INFO][5442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.918430 containerd[1471]: 2025-11-08 00:37:40.916 [INFO][5435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf" Nov 8 00:37:40.918944 containerd[1471]: time="2025-11-08T00:37:40.918911300Z" level=info msg="TearDown network for sandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\" successfully" Nov 8 00:37:40.922207 containerd[1471]: time="2025-11-08T00:37:40.922178694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:40.922280 containerd[1471]: time="2025-11-08T00:37:40.922227134Z" level=info msg="RemovePodSandbox \"13d6b37ef4e09cfabad98d6e973da58fde0413cc2247364bfbe2f14717d45bbf\" returns successfully" Nov 8 00:37:40.922783 containerd[1471]: time="2025-11-08T00:37:40.922747224Z" level=info msg="StopPodSandbox for \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\"" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.955 [WARNING][5456] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" WorkloadEndpoint="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.955 [INFO][5456] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.955 [INFO][5456] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" iface="eth0" netns="" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.955 [INFO][5456] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.955 [INFO][5456] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.978 [INFO][5463] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.978 [INFO][5463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.979 [INFO][5463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.984 [WARNING][5463] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.984 [INFO][5463] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.985 [INFO][5463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:40.991035 containerd[1471]: 2025-11-08 00:37:40.987 [INFO][5456] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:40.991889 containerd[1471]: time="2025-11-08T00:37:40.991065362Z" level=info msg="TearDown network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\" successfully" Nov 8 00:37:40.991889 containerd[1471]: time="2025-11-08T00:37:40.991087372Z" level=info msg="StopPodSandbox for \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\" returns successfully" Nov 8 00:37:40.991889 containerd[1471]: time="2025-11-08T00:37:40.991414523Z" level=info msg="RemovePodSandbox for \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\"" Nov 8 00:37:40.991889 containerd[1471]: time="2025-11-08T00:37:40.991449863Z" level=info msg="Forcibly stopping sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\"" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.020 [WARNING][5477] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" WorkloadEndpoint="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.020 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.020 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" iface="eth0" netns="" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.021 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.021 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.041 [INFO][5485] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.041 [INFO][5485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.041 [INFO][5485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.047 [WARNING][5485] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.047 [INFO][5485] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" HandleID="k8s-pod-network.16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Workload="172--239--57--24-k8s-whisker--7579885fb4--6qtb5-eth0" Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.049 [INFO][5485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:41.056351 containerd[1471]: 2025-11-08 00:37:41.051 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1" Nov 8 00:37:41.056351 containerd[1471]: time="2025-11-08T00:37:41.054757677Z" level=info msg="TearDown network for sandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\" successfully" Nov 8 00:37:41.058572 containerd[1471]: time="2025-11-08T00:37:41.058549632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:41.058673 containerd[1471]: time="2025-11-08T00:37:41.058653162Z" level=info msg="RemovePodSandbox \"16073a7e9d8d8ffa59f86e27e7dbaa59f0201e88abbe83aa238b58da14b56ff1\" returns successfully" Nov 8 00:37:41.059276 containerd[1471]: time="2025-11-08T00:37:41.059236452Z" level=info msg="StopPodSandbox for \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\"" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.089 [WARNING][5499] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7ec371f-050b-4208-a3ac-8f708d9ed8b9", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be", Pod:"calico-apiserver-5cc866c96c-jlfgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali199d41bb827", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.089 [INFO][5499] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.089 [INFO][5499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" iface="eth0" netns="" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.089 [INFO][5499] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.089 [INFO][5499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.111 [INFO][5506] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.111 [INFO][5506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.111 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.116 [WARNING][5506] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.116 [INFO][5506] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.117 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:41.122354 containerd[1471]: 2025-11-08 00:37:41.120 [INFO][5499] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.122354 containerd[1471]: time="2025-11-08T00:37:41.122312126Z" level=info msg="TearDown network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\" successfully" Nov 8 00:37:41.122354 containerd[1471]: time="2025-11-08T00:37:41.122352736Z" level=info msg="StopPodSandbox for \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\" returns successfully" Nov 8 00:37:41.124141 containerd[1471]: time="2025-11-08T00:37:41.124113519Z" level=info msg="RemovePodSandbox for \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\"" Nov 8 00:37:41.124216 containerd[1471]: time="2025-11-08T00:37:41.124145659Z" level=info msg="Forcibly stopping sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\"" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.163 [WARNING][5521] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0", GenerateName:"calico-apiserver-5cc866c96c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7ec371f-050b-4208-a3ac-8f708d9ed8b9", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc866c96c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-24", ContainerID:"2304883e8eb812e1c218f46a3078bb143f14cb25a6c01fa40d42c725538062be", Pod:"calico-apiserver-5cc866c96c-jlfgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali199d41bb827", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.163 [INFO][5521] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.163 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" iface="eth0" netns="" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.163 [INFO][5521] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.163 [INFO][5521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.195 [INFO][5528] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.195 [INFO][5528] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.196 [INFO][5528] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.204 [WARNING][5528] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.204 [INFO][5528] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" HandleID="k8s-pod-network.bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Workload="172--239--57--24-k8s-calico--apiserver--5cc866c96c--jlfgk-eth0" Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.206 [INFO][5528] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:37:41.212713 containerd[1471]: 2025-11-08 00:37:41.209 [INFO][5521] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1" Nov 8 00:37:41.213341 containerd[1471]: time="2025-11-08T00:37:41.212730350Z" level=info msg="TearDown network for sandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\" successfully" Nov 8 00:37:41.216605 containerd[1471]: time="2025-11-08T00:37:41.216567734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:37:41.216690 containerd[1471]: time="2025-11-08T00:37:41.216631944Z" level=info msg="RemovePodSandbox \"bbbf220bc2b82768642c52b7f315c30fbc8b4c3d7b16bd267a5089305a540fd1\" returns successfully" Nov 8 00:37:44.973279 kubelet[2549]: E1108 00:37:44.972908 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:44.973984 kubelet[2549]: E1108 00:37:44.973841 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:46.976201 kubelet[2549]: E1108 00:37:46.973537 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:37:48.979257 kubelet[2549]: E1108 00:37:48.978863 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:37:48.979762 containerd[1471]: time="2025-11-08T00:37:48.979205249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:37:49.110597 containerd[1471]: time="2025-11-08T00:37:49.110539909Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:49.111665 containerd[1471]: time="2025-11-08T00:37:49.111636519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:37:49.111791 containerd[1471]: time="2025-11-08T00:37:49.111702910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:37:49.113093 kubelet[2549]: E1108 00:37:49.113045 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:37:49.113168 kubelet[2549]: E1108 00:37:49.113095 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:37:49.113235 kubelet[2549]: E1108 00:37:49.113204 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e094445afe4d4a0db1ebe2df45d03ef3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:49.116006 containerd[1471]: time="2025-11-08T00:37:49.115985625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:37:49.256355 containerd[1471]: time="2025-11-08T00:37:49.256034545Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:49.257189 containerd[1471]: time="2025-11-08T00:37:49.256988206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:37:49.257189 containerd[1471]: time="2025-11-08T00:37:49.257069317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:37:49.257413 kubelet[2549]: E1108 00:37:49.257226 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:37:49.257413 kubelet[2549]: E1108 00:37:49.257271 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:37:49.257486 kubelet[2549]: E1108 00:37:49.257393 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:49.258841 kubelet[2549]: E1108 00:37:49.258769 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:37:49.974995 kubelet[2549]: E1108 00:37:49.974945 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:37:55.973897 kubelet[2549]: E1108 00:37:55.973839 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:37:55.977467 containerd[1471]: time="2025-11-08T00:37:55.977249750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:56.116851 containerd[1471]: time="2025-11-08T00:37:56.116775483Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:56.118098 containerd[1471]: time="2025-11-08T00:37:56.118047545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:56.118369 containerd[1471]: time="2025-11-08T00:37:56.118169565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:56.118446 kubelet[2549]: E1108 00:37:56.118412 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:56.118492 kubelet[2549]: E1108 00:37:56.118459 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:56.119571 kubelet[2549]: E1108 00:37:56.118590 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jf2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-r4b4p_calico-apiserver(5e48afab-b056-4d85-9cc7-4c4bf819b790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:56.120013 kubelet[2549]: E1108 00:37:56.119823 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:37:57.976266 containerd[1471]: time="2025-11-08T00:37:57.976185549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:58.110621 containerd[1471]: time="2025-11-08T00:37:58.110555239Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:58.111583 containerd[1471]: time="2025-11-08T00:37:58.111534560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:58.111738 containerd[1471]: time="2025-11-08T00:37:58.111625230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:58.111795 kubelet[2549]: E1108 00:37:58.111752 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:58.112129 kubelet[2549]: E1108 00:37:58.111806 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:58.112129 kubelet[2549]: E1108 
00:37:58.111925 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j488h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-jlfgk_calico-apiserver(b7ec371f-050b-4208-a3ac-8f708d9ed8b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:58.113751 kubelet[2549]: E1108 00:37:58.113718 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:37:59.977595 containerd[1471]: time="2025-11-08T00:37:59.977539104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:38:00.120020 containerd[1471]: time="2025-11-08T00:38:00.119821458Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:00.120833 containerd[1471]: time="2025-11-08T00:38:00.120691289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:38:00.120833 containerd[1471]: time="2025-11-08T00:38:00.120784399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:38:00.121165 kubelet[2549]: E1108 00:38:00.121091 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:38:00.121955 kubelet[2549]: E1108 00:38:00.121259 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:38:00.133898 kubelet[2549]: E1108 00:38:00.133799 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:00.137013 containerd[1471]: time="2025-11-08T00:38:00.136968651Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:38:00.281368 containerd[1471]: time="2025-11-08T00:38:00.280290176Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:00.281368 containerd[1471]: time="2025-11-08T00:38:00.281335327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:38:00.281502 containerd[1471]: time="2025-11-08T00:38:00.281404417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:38:00.281667 kubelet[2549]: E1108 00:38:00.281625 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:38:00.281710 kubelet[2549]: E1108 00:38:00.281681 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:38:00.281869 kubelet[2549]: E1108 00:38:00.281820 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:00.283454 kubelet[2549]: E1108 00:38:00.283410 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:38:01.978642 containerd[1471]: time="2025-11-08T00:38:01.976906139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:38:02.113641 containerd[1471]: time="2025-11-08T00:38:02.113387132Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:02.114845 containerd[1471]: time="2025-11-08T00:38:02.114788327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:38:02.114968 containerd[1471]: time="2025-11-08T00:38:02.114907136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:38:02.116090 kubelet[2549]: E1108 00:38:02.115165 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:38:02.116090 kubelet[2549]: E1108 00:38:02.115235 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:38:02.116090 kubelet[2549]: E1108 00:38:02.115417 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pph6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-769c89c5c9-znhjq_calico-system(9f16a8b2-c22c-42c4-a0b9-731351a537c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:02.117201 kubelet[2549]: E1108 00:38:02.116611 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:38:02.973137 kubelet[2549]: E1108 00:38:02.973056 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:38:02.974042 containerd[1471]: time="2025-11-08T00:38:02.974001592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:38:03.118488 containerd[1471]: time="2025-11-08T00:38:03.118137872Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:03.120247 containerd[1471]: time="2025-11-08T00:38:03.120206021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found" Nov 8 00:38:03.120247 containerd[1471]: time="2025-11-08T00:38:03.120279510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:38:03.120513 kubelet[2549]: E1108 00:38:03.120416 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:38:03.120513 kubelet[2549]: E1108 00:38:03.120484 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:38:03.120864 kubelet[2549]: E1108 00:38:03.120624 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcdpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x5vbk_calico-system(b339edb0-297f-4caa-90a2-1e5e9c9f0583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:03.122251 kubelet[2549]: E1108 00:38:03.122213 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:38:07.972815 kubelet[2549]: E1108 00:38:07.972442 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:38:10.240850 systemd[1]: run-containerd-runc-k8s.io-9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9-runc.gPtJqP.mount: Deactivated successfully. 
Nov 8 00:38:10.972442 kubelet[2549]: E1108 00:38:10.971606 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:38:10.974890 kubelet[2549]: E1108 00:38:10.974583 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:38:11.973533 kubelet[2549]: E1108 00:38:11.973482 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:38:13.972848 kubelet[2549]: E1108 00:38:13.971928 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:38:13.979054 kubelet[2549]: E1108 00:38:13.978874 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:38:14.973434 kubelet[2549]: E1108 00:38:14.973366 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:38:16.973376 kubelet[2549]: E1108 00:38:16.973341 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:38:16.973942 kubelet[2549]: E1108 00:38:16.973040 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:38:23.973383 kubelet[2549]: E1108 00:38:23.973034 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:38:24.973003 kubelet[2549]: E1108 00:38:24.972950 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:38:28.975299 kubelet[2549]: E1108 00:38:28.974155 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" 
podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:38:28.978150 kubelet[2549]: E1108 00:38:28.977448 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:38:29.975415 kubelet[2549]: E1108 00:38:29.974297 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:38:30.972932 kubelet[2549]: E1108 00:38:30.972697 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:38:31.058877 systemd[1]: Started sshd@7-172.239.57.24:22-8.222.132.244:47162.service - OpenSSH per-connection server daemon (8.222.132.244:47162). Nov 8 00:38:32.352026 sshd[5577]: Received disconnect from 8.222.132.244 port 47162:11: Bye Bye [preauth] Nov 8 00:38:32.352026 sshd[5577]: Disconnected from authenticating user root 8.222.132.244 port 47162 [preauth] Nov 8 00:38:32.355895 systemd[1]: sshd@7-172.239.57.24:22-8.222.132.244:47162.service: Deactivated successfully. 
Nov 8 00:38:34.359851 kernel: hrtimer: interrupt took 6238626 ns Nov 8 00:38:35.975359 kubelet[2549]: E1108 00:38:35.974917 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:38:37.973108 kubelet[2549]: E1108 00:38:37.972655 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:38:39.972357 kubelet[2549]: E1108 00:38:39.972060 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:38:40.240585 systemd[1]: run-containerd-runc-k8s.io-9cedc3f79231a690e9555f8dc5969a5c24ccb8ec9055e177ae9ee8a6b0a444f9-runc.z2Amxp.mount: Deactivated successfully. Nov 8 00:38:40.973406 kubelet[2549]: E1108 00:38:40.973348 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:38:41.975864 containerd[1471]: time="2025-11-08T00:38:41.975619953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:38:42.104312 containerd[1471]: time="2025-11-08T00:38:42.104241538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:42.106097 containerd[1471]: time="2025-11-08T00:38:42.105465365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:38:42.106097 containerd[1471]: time="2025-11-08T00:38:42.105556615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:38:42.106242 kubelet[2549]: E1108 00:38:42.105897 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:38:42.106242 kubelet[2549]: E1108 00:38:42.105945 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:38:42.114704 kubelet[2549]: E1108 00:38:42.114618 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:42.118072 containerd[1471]: time="2025-11-08T00:38:42.117994009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:38:42.249743 containerd[1471]: time="2025-11-08T00:38:42.249357069Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:42.250956 containerd[1471]: time="2025-11-08T00:38:42.250915884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:38:42.251149 containerd[1471]: time="2025-11-08T00:38:42.251008014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:38:42.251184 kubelet[2549]: E1108 00:38:42.251123 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:38:42.251184 kubelet[2549]: E1108 00:38:42.251179 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:38:42.251757 kubelet[2549]: E1108 00:38:42.251670 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcg49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66pk_calico-system(e5ed425e-ae3a-4fee-9b79-13f79eee03b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Nov 8 00:38:42.253239 kubelet[2549]: E1108 00:38:42.253188 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:38:43.741555 systemd[1]: Started sshd@8-172.239.57.24:22-147.75.109.163:51322.service - OpenSSH per-connection server daemon (147.75.109.163:51322). Nov 8 00:38:43.980843 containerd[1471]: time="2025-11-08T00:38:43.980513522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:38:44.067547 sshd[5608]: Accepted publickey for core from 147.75.109.163 port 51322 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:44.070819 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:44.078286 systemd-logind[1449]: New session 8 of user core. Nov 8 00:38:44.082517 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:38:44.122391 containerd[1471]: time="2025-11-08T00:38:44.120410631Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:44.122391 containerd[1471]: time="2025-11-08T00:38:44.122007748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:38:44.122391 containerd[1471]: time="2025-11-08T00:38:44.122116727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:38:44.122732 kubelet[2549]: E1108 00:38:44.122698 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:38:44.123121 kubelet[2549]: E1108 00:38:44.122756 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:38:44.123121 kubelet[2549]: E1108 00:38:44.122952 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e094445afe4d4a0db1ebe2df45d03ef3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:44.125502 containerd[1471]: time="2025-11-08T00:38:44.125464368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:38:44.266983 containerd[1471]: time="2025-11-08T00:38:44.266917145Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:44.268011 containerd[1471]: time="2025-11-08T00:38:44.267948482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:38:44.268190 containerd[1471]: time="2025-11-08T00:38:44.267976542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:38:44.269103 kubelet[2549]: E1108 00:38:44.268354 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:38:44.269103 kubelet[2549]: E1108 00:38:44.268418 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:38:44.269103 
kubelet[2549]: E1108 00:38:44.268726 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcdpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x5vbk_calico-system(b339edb0-297f-4caa-90a2-1e5e9c9f0583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:44.269942 containerd[1471]: time="2025-11-08T00:38:44.269552298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:38:44.270188 kubelet[2549]: E1108 00:38:44.270162 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:38:44.421130 containerd[1471]: time="2025-11-08T00:38:44.420936198Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:44.421953 containerd[1471]: time="2025-11-08T00:38:44.421910275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:38:44.422196 containerd[1471]: time="2025-11-08T00:38:44.422037656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:38:44.423938 kubelet[2549]: E1108 00:38:44.422414 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:38:44.423938 kubelet[2549]: E1108 00:38:44.422491 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:38:44.423938 kubelet[2549]: E1108 00:38:44.422635 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twrhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f9666476b-6d4dx_calico-system(e93c897e-2024-4417-8017-e4980e091fbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:44.424400 kubelet[2549]: E1108 00:38:44.424371 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:38:44.434667 sshd[5608]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:44.440301 systemd[1]: sshd@8-172.239.57.24:22-147.75.109.163:51322.service: Deactivated successfully. Nov 8 00:38:44.444231 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:38:44.448926 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:38:44.450253 systemd-logind[1449]: Removed session 8. 
Nov 8 00:38:44.971259 kubelet[2549]: E1108 00:38:44.971202 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:38:45.973361 kubelet[2549]: E1108 00:38:45.972632 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:38:47.972526 containerd[1471]: time="2025-11-08T00:38:47.972449062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:38:48.106033 containerd[1471]: time="2025-11-08T00:38:48.105965884Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:48.107198 containerd[1471]: time="2025-11-08T00:38:48.107082181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:38:48.107198 containerd[1471]: time="2025-11-08T00:38:48.107124821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:38:48.107298 kubelet[2549]: E1108 00:38:48.107252 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:38:48.107298 kubelet[2549]: E1108 00:38:48.107287 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:38:48.108093 kubelet[2549]: E1108 00:38:48.107736 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jf2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-r4b4p_calico-apiserver(5e48afab-b056-4d85-9cc7-4c4bf819b790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:48.108922 kubelet[2549]: E1108 00:38:48.108891 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:38:49.498773 systemd[1]: Started sshd@9-172.239.57.24:22-147.75.109.163:51336.service - OpenSSH per-connection server daemon (147.75.109.163:51336). Nov 8 00:38:49.819424 sshd[5633]: Accepted publickey for core from 147.75.109.163 port 51336 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:49.821421 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:49.827399 systemd-logind[1449]: New session 9 of user core. Nov 8 00:38:49.832712 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:38:50.135216 sshd[5633]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:50.139922 systemd[1]: sshd@9-172.239.57.24:22-147.75.109.163:51336.service: Deactivated successfully. Nov 8 00:38:50.140108 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:38:50.142420 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:38:50.143267 systemd-logind[1449]: Removed session 9. 
Nov 8 00:38:52.972227 containerd[1471]: time="2025-11-08T00:38:52.972072452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:38:53.114502 containerd[1471]: time="2025-11-08T00:38:53.114447585Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:53.117901 containerd[1471]: time="2025-11-08T00:38:53.117753109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:38:53.117901 containerd[1471]: time="2025-11-08T00:38:53.117839439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:38:53.118004 kubelet[2549]: E1108 00:38:53.117969 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:38:53.119029 kubelet[2549]: E1108 00:38:53.118017 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:38:53.119029 kubelet[2549]: E1108 00:38:53.118157 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j488h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cc866c96c-jlfgk_calico-apiserver(b7ec371f-050b-4208-a3ac-8f708d9ed8b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:53.119618 kubelet[2549]: E1108 00:38:53.119588 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:38:54.972173 kubelet[2549]: E1108 00:38:54.972045 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:38:55.208527 systemd[1]: Started sshd@10-172.239.57.24:22-147.75.109.163:48864.service - OpenSSH per-connection server daemon (147.75.109.163:48864). Nov 8 00:38:55.557181 sshd[5647]: Accepted publickey for core from 147.75.109.163 port 48864 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:55.559202 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:55.565546 systemd-logind[1449]: New session 10 of user core. Nov 8 00:38:55.572429 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 00:38:55.870069 sshd[5647]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:55.873563 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:38:55.875920 systemd[1]: sshd@10-172.239.57.24:22-147.75.109.163:48864.service: Deactivated successfully. Nov 8 00:38:55.878917 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:38:55.881744 systemd-logind[1449]: Removed session 10. Nov 8 00:38:55.933576 systemd[1]: Started sshd@11-172.239.57.24:22-147.75.109.163:48866.service - OpenSSH per-connection server daemon (147.75.109.163:48866). Nov 8 00:38:55.972735 containerd[1471]: time="2025-11-08T00:38:55.972709061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:38:56.109685 containerd[1471]: time="2025-11-08T00:38:56.109634533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:38:56.110882 containerd[1471]: time="2025-11-08T00:38:56.110816941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:38:56.110882 containerd[1471]: time="2025-11-08T00:38:56.110849031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:38:56.111009 kubelet[2549]: E1108 00:38:56.110979 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:38:56.111352 kubelet[2549]: E1108 00:38:56.111018 2549 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:38:56.111352 kubelet[2549]: E1108 00:38:56.111120 2549 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pph6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-769c89c5c9-znhjq_calico-system(9f16a8b2-c22c-42c4-a0b9-731351a537c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:38:56.112525 kubelet[2549]: E1108 00:38:56.112479 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:38:56.273407 sshd[5665]: Accepted publickey for core from 147.75.109.163 port 48866 ssh2: RSA 
SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:56.274225 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:56.280742 systemd-logind[1449]: New session 11 of user core. Nov 8 00:38:56.285462 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:38:56.619540 sshd[5665]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:56.625980 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:38:56.626456 systemd[1]: sshd@11-172.239.57.24:22-147.75.109.163:48866.service: Deactivated successfully. Nov 8 00:38:56.628477 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:38:56.629271 systemd-logind[1449]: Removed session 11. Nov 8 00:38:56.684556 systemd[1]: Started sshd@12-172.239.57.24:22-147.75.109.163:48874.service - OpenSSH per-connection server daemon (147.75.109.163:48874). Nov 8 00:38:56.971386 kubelet[2549]: E1108 00:38:56.971112 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:38:57.027607 sshd[5676]: Accepted publickey for core from 147.75.109.163 port 48874 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:38:57.029206 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:57.037898 systemd-logind[1449]: New session 12 of user core. Nov 8 00:38:57.043682 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:38:57.360138 sshd[5676]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:57.365016 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:38:57.366276 systemd[1]: sshd@12-172.239.57.24:22-147.75.109.163:48874.service: Deactivated successfully. Nov 8 00:38:57.369032 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:38:57.371094 systemd-logind[1449]: Removed session 12. 
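Because every Calico image in these entries fails with the same "not found", the quickest confirmation is to ask the registry's OCI distribution API directly whether the tag exists. The sketch below does that for the kube-controllers image; the ghcr.io anonymous-token endpoint and scope format follow the common Docker registry token flow and are assumptions, not something the log shows.

    // Sketch: check tag existence via the OCI distribution API (assumed auth flow).
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        repo, tag := "flatcar/calico/kube-controllers", "v3.30.4"

        // Fetch an anonymous pull token (assumed ghcr.io token endpoint).
        resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            log.Fatal(err)
        }

        // HEAD the manifest: 200 means the tag exists, 404 matches the log above.
        req, err := http.NewRequest(http.MethodHead,
            "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        res.Body.Close()
        fmt.Println(res.StatusCode) // 404 here matches the NotFound in the log
    }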
Nov 8 00:38:58.974354 kubelet[2549]: E1108 00:38:58.974283 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:38:58.974785 kubelet[2549]: E1108 00:38:58.974699 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:39:02.428558 systemd[1]: Started sshd@13-172.239.57.24:22-147.75.109.163:51428.service - OpenSSH per-connection server daemon (147.75.109.163:51428). Nov 8 00:39:02.767937 sshd[5696]: Accepted publickey for core from 147.75.109.163 port 51428 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:02.771066 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:02.777962 systemd-logind[1449]: New session 13 of user core. Nov 8 00:39:02.782482 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:39:02.977945 kubelet[2549]: E1108 00:39:02.975409 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:39:03.110979 sshd[5696]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:03.118229 systemd[1]: sshd@13-172.239.57.24:22-147.75.109.163:51428.service: Deactivated successfully. Nov 8 00:39:03.124197 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:39:03.126688 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:39:03.128646 systemd-logind[1449]: Removed session 13. Nov 8 00:39:03.175508 systemd[1]: Started sshd@14-172.239.57.24:22-147.75.109.163:51440.service - OpenSSH per-connection server daemon (147.75.109.163:51440). 
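Note how pod_workers aggregates several failing containers into a single "Error syncing pod" entry (calico-csi plus csi-node-driver-registrar, whisker plus whisker-backend). The same failures are also recorded as Kubernetes Events, which are often easier to scan than the kubelet log; a hedged client-go sketch that lists them follows (the kubeconfig path and namespace are assumptions for illustration).

    // Sketch: list Failed events in the namespace where these pods run.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location on a control-plane node.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        evs, err := cs.CoreV1().Events("calico-system").List(context.Background(),
            metav1.ListOptions{FieldSelector: "reason=Failed"})
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("%s/%s: %s\n", e.Namespace, e.InvolvedObject.Name, e.Message)
        }
    }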
Nov 8 00:39:03.505042 sshd[5708]: Accepted publickey for core from 147.75.109.163 port 51440 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:03.507372 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:03.515764 systemd-logind[1449]: New session 14 of user core. Nov 8 00:39:03.518454 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:39:03.976389 kubelet[2549]: E1108 00:39:03.975637 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:39:03.989894 sshd[5708]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:03.995051 systemd[1]: sshd@14-172.239.57.24:22-147.75.109.163:51440.service: Deactivated successfully. Nov 8 00:39:03.997692 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:39:03.998641 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:39:04.000456 systemd-logind[1449]: Removed session 14. Nov 8 00:39:04.058771 systemd[1]: Started sshd@15-172.239.57.24:22-147.75.109.163:51456.service - OpenSSH per-connection server daemon (147.75.109.163:51456). Nov 8 00:39:04.388191 sshd[5719]: Accepted publickey for core from 147.75.109.163 port 51456 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:04.390831 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:04.397562 systemd-logind[1449]: New session 15 of user core. Nov 8 00:39:04.404581 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:39:05.539642 sshd[5719]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:05.549376 systemd[1]: sshd@15-172.239.57.24:22-147.75.109.163:51456.service: Deactivated successfully. Nov 8 00:39:05.553781 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:39:05.556677 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:39:05.558141 systemd-logind[1449]: Removed session 15. Nov 8 00:39:05.607600 systemd[1]: Started sshd@16-172.239.57.24:22-147.75.109.163:51472.service - OpenSSH per-connection server daemon (147.75.109.163:51472). Nov 8 00:39:05.952773 sshd[5738]: Accepted publickey for core from 147.75.109.163 port 51472 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:05.954967 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:05.961851 systemd-logind[1449]: New session 16 of user core. Nov 8 00:39:05.966828 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:39:06.445189 sshd[5738]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:06.451459 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:39:06.452642 systemd[1]: sshd@16-172.239.57.24:22-147.75.109.163:51472.service: Deactivated successfully. Nov 8 00:39:06.456366 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:39:06.461653 systemd-logind[1449]: Removed session 16. 
Nov 8 00:39:06.512740 systemd[1]: Started sshd@17-172.239.57.24:22-147.75.109.163:51474.service - OpenSSH per-connection server daemon (147.75.109.163:51474). Nov 8 00:39:06.854805 sshd[5749]: Accepted publickey for core from 147.75.109.163 port 51474 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:06.856575 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:06.863109 systemd-logind[1449]: New session 17 of user core. Nov 8 00:39:06.870440 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:39:06.978743 kubelet[2549]: E1108 00:39:06.978426 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:39:06.981091 kubelet[2549]: E1108 00:39:06.980851 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:39:07.217306 sshd[5749]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:07.223149 systemd[1]: sshd@17-172.239.57.24:22-147.75.109.163:51474.service: Deactivated successfully. Nov 8 00:39:07.223554 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:39:07.228868 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:39:07.233077 systemd-logind[1449]: Removed session 17. 
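The recurring dns.go "Nameserver limits exceeded" warning is unrelated to the image pulls: the node's resolv.conf lists more than three nameservers, the glibc resolver only honors the first three, and the kubelet therefore trims the list it propagates to pods (here to 172.232.0.22, 172.232.0.9, 172.232.0.19). A minimal sketch of the same check; the limit of 3 mirrors glibc's MAXNS, though the kubelet's actual implementation differs in detail.

    // Sketch: warn when resolv.conf carries more nameservers than glibc will use.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        // glibc's resolver uses at most the first three nameserver lines (MAXNS).
        if len(servers) > 3 {
            fmt.Printf("nameserver limit exceeded; only %v would be applied\n", servers[:3])
        }
    }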
Nov 8 00:39:08.972306 kubelet[2549]: E1108 00:39:08.971991 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:39:09.976635 kubelet[2549]: E1108 00:39:09.976585 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:39:10.973744 kubelet[2549]: E1108 00:39:10.973680 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:39:11.975887 kubelet[2549]: E1108 00:39:11.975767 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:39:12.280235 systemd[1]: Started sshd@18-172.239.57.24:22-147.75.109.163:41022.service - OpenSSH per-connection server daemon (147.75.109.163:41022). Nov 8 00:39:12.629009 sshd[5798]: Accepted publickey for core from 147.75.109.163 port 41022 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:12.630611 sshd[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:12.642169 systemd-logind[1449]: New session 18 of user core. Nov 8 00:39:12.647506 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:39:12.949223 sshd[5798]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:12.956942 systemd[1]: sshd@18-172.239.57.24:22-147.75.109.163:41022.service: Deactivated successfully. Nov 8 00:39:12.959830 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:39:12.961212 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. 
Nov 8 00:39:12.962307 systemd-logind[1449]: Removed session 18. Nov 8 00:39:16.972668 kubelet[2549]: E1108 00:39:16.972595 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:39:16.973841 kubelet[2549]: E1108 00:39:16.973634 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:39:18.019660 systemd[1]: Started sshd@19-172.239.57.24:22-147.75.109.163:41024.service - OpenSSH per-connection server daemon (147.75.109.163:41024). Nov 8 00:39:18.366597 sshd[5813]: Accepted publickey for core from 147.75.109.163 port 41024 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:18.369980 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:18.376565 systemd-logind[1449]: New session 19 of user core. Nov 8 00:39:18.384539 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:39:18.687426 sshd[5813]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:18.691043 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:39:18.691768 systemd[1]: sshd@19-172.239.57.24:22-147.75.109.163:41024.service: Deactivated successfully. Nov 8 00:39:18.694983 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:39:18.695920 systemd-logind[1449]: Removed session 19. 
Nov 8 00:39:19.982872 kubelet[2549]: E1108 00:39:19.982547 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:39:20.974402 kubelet[2549]: E1108 00:39:20.974058 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:39:22.972745 kubelet[2549]: E1108 00:39:22.972245 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583" Nov 8 00:39:23.758574 systemd[1]: Started sshd@20-172.239.57.24:22-147.75.109.163:45654.service - OpenSSH per-connection server daemon (147.75.109.163:45654). Nov 8 00:39:24.110745 sshd[5826]: Accepted publickey for core from 147.75.109.163 port 45654 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:24.112560 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:24.120124 systemd-logind[1449]: New session 20 of user core. Nov 8 00:39:24.126631 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:39:24.499568 sshd[5826]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:24.505301 systemd[1]: sshd@20-172.239.57.24:22-147.75.109.163:45654.service: Deactivated successfully. Nov 8 00:39:24.509415 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:39:24.512755 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:39:24.515440 systemd-logind[1449]: Removed session 20. 
Nov 8 00:39:24.973551 kubelet[2549]: E1108 00:39:24.973468 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f9666476b-6d4dx" podUID="e93c897e-2024-4417-8017-e4980e091fbc" Nov 8 00:39:25.975009 kubelet[2549]: E1108 00:39:25.973190 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:39:28.972650 kubelet[2549]: E1108 00:39:28.972146 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-jlfgk" podUID="b7ec371f-050b-4208-a3ac-8f708d9ed8b9" Nov 8 00:39:29.562540 systemd[1]: Started sshd@21-172.239.57.24:22-147.75.109.163:45664.service - OpenSSH per-connection server daemon (147.75.109.163:45664). Nov 8 00:39:29.907125 sshd[5839]: Accepted publickey for core from 147.75.109.163 port 45664 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:29.908621 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:29.913979 systemd-logind[1449]: New session 21 of user core. Nov 8 00:39:29.918449 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:39:29.973402 kubelet[2549]: E1108 00:39:29.973377 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 8 00:39:30.231828 sshd[5839]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:30.236281 systemd[1]: sshd@21-172.239.57.24:22-147.75.109.163:45664.service: Deactivated successfully. Nov 8 00:39:30.239054 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:39:30.241206 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:39:30.242820 systemd-logind[1449]: Removed session 21. 
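The steady rhythm of ImagePullBackOff entries, each pod retried at growing intervals, comes from the kubelet retrying failed pulls on an exponential back-off. A sketch of that policy follows; the 10-second base and 5-minute cap are the commonly cited kubelet defaults for image pulls and should be treated as assumptions, not values taken from this log.

    // Sketch: exponential back-off with a cap, the shape behind ImagePullBackOff.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        base, maxDelay := 10*time.Second, 5*time.Minute
        delay := base
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: back off %v\n", attempt, delay)
            // Double the delay after each failure, clamped at the cap.
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Once the cap is reached, retries continue at the capped interval, which is why the same "Back-off pulling image" errors recur every few minutes for as long as the tags remain missing.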
Nov 8 00:39:30.973040 kubelet[2549]: E1108 00:39:30.972969 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cc866c96c-r4b4p" podUID="5e48afab-b056-4d85-9cc7-4c4bf819b790" Nov 8 00:39:33.981553 kubelet[2549]: E1108 00:39:33.981481 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-769c89c5c9-znhjq" podUID="9f16a8b2-c22c-42c4-a0b9-731351a537c7" Nov 8 00:39:33.986377 kubelet[2549]: E1108 00:39:33.982631 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66pk" podUID="e5ed425e-ae3a-4fee-9b79-13f79eee03b3" Nov 8 00:39:35.298547 systemd[1]: Started sshd@22-172.239.57.24:22-147.75.109.163:33312.service - OpenSSH per-connection server daemon (147.75.109.163:33312). Nov 8 00:39:35.637954 sshd[5852]: Accepted publickey for core from 147.75.109.163 port 33312 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:39:35.640606 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:35.647221 systemd-logind[1449]: New session 22 of user core. Nov 8 00:39:35.653494 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:39:35.955879 sshd[5852]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:35.959896 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:39:35.960610 systemd[1]: sshd@22-172.239.57.24:22-147.75.109.163:33312.service: Deactivated successfully. Nov 8 00:39:35.962880 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:39:35.964162 systemd-logind[1449]: Removed session 22. 
Nov 8 00:39:37.974110 kubelet[2549]: E1108 00:39:37.974041 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x5vbk" podUID="b339edb0-297f-4caa-90a2-1e5e9c9f0583"