Nov 1 00:22:03.040399 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:22:03.040424 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:22:03.040433 kernel: BIOS-provided physical RAM map:
Nov 1 00:22:03.040439 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 1 00:22:03.040444 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 1 00:22:03.040454 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:22:03.040460 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 1 00:22:03.040466 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 1 00:22:03.040471 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:22:03.040477 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:22:03.040483 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:22:03.040489 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:22:03.040495 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 1 00:22:03.040865 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:22:03.040876 kernel: NX (Execute Disable) protection: active
Nov 1 00:22:03.040883 kernel: APIC: Static calls initialized
Nov 1 00:22:03.040889 kernel: SMBIOS 2.8 present.
Nov 1 00:22:03.040896 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 1 00:22:03.040902 kernel: Hypervisor detected: KVM
Nov 1 00:22:03.040912 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:22:03.040918 kernel: kvm-clock: using sched offset of 5723100550 cycles
Nov 1 00:22:03.040925 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:22:03.040932 kernel: tsc: Detected 2000.000 MHz processor
Nov 1 00:22:03.040938 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:22:03.040945 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:22:03.040951 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 1 00:22:03.040958 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 00:22:03.040964 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:22:03.040973 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 1 00:22:03.040979 kernel: Using GB pages for direct mapping
Nov 1 00:22:03.040985 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:22:03.040991 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 1 00:22:03.040997 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:22:03.041004 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:22:03.041010 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:22:03.041016 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 1 00:22:03.041022 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:22:03.041031 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:22:03.041037 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:22:03.041044 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:22:03.041054 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 1 00:22:03.041060 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 1 00:22:03.041067 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 1 00:22:03.041075 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 1 00:22:03.041082 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 1 00:22:03.041088 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 1 00:22:03.041095 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 1 00:22:03.041101 kernel: No NUMA configuration found
Nov 1 00:22:03.041108 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 1 00:22:03.041114 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Nov 1 00:22:03.041121 kernel: Zone ranges:
Nov 1 00:22:03.041130 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:22:03.041137 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:22:03.041143 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 1 00:22:03.041149 kernel: Movable zone start for each node
Nov 1 00:22:03.041156 kernel: Early memory node ranges
Nov 1 00:22:03.041162 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:22:03.041168 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 1 00:22:03.041175 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 1 00:22:03.041181 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 1 00:22:03.041190 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:22:03.041196 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:22:03.041203 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 1 00:22:03.041209 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:22:03.041216 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:22:03.041222 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:22:03.041229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:22:03.041235 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:22:03.041242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:22:03.041250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:22:03.041257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:22:03.041263 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:22:03.041270 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:22:03.041276 kernel: TSC deadline timer available
Nov 1 00:22:03.041283 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:22:03.041289 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:22:03.041295 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:22:03.041302 kernel: kvm-guest: setup PV sched yield
Nov 1 00:22:03.041308 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:22:03.041317 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:22:03.041324 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:22:03.041331 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:22:03.041338 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:22:03.041344 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:22:03.041350 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:22:03.041398 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:22:03.041409 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:22:03.041417 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:22:03.041429 kernel: random: crng init done
Nov 1 00:22:03.041435 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:22:03.041442 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:22:03.041448 kernel: Fallback order for Node 0: 0
Nov 1 00:22:03.041455 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Nov 1 00:22:03.041461 kernel: Policy zone: Normal
Nov 1 00:22:03.041468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:22:03.041474 kernel: software IO TLB: area num 2.
Nov 1 00:22:03.041484 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 227300K reserved, 0K cma-reserved)
Nov 1 00:22:03.041490 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:22:03.041497 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:22:03.041503 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:22:03.041510 kernel: Dynamic Preempt: voluntary
Nov 1 00:22:03.041516 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:22:03.041527 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:22:03.041534 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:22:03.041541 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:22:03.041550 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:22:03.041557 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:22:03.041563 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:22:03.041569 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:22:03.041576 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:22:03.041582 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:22:03.041588 kernel: Console: colour VGA+ 80x25
Nov 1 00:22:03.041595 kernel: printk: console [tty0] enabled
Nov 1 00:22:03.041601 kernel: printk: console [ttyS0] enabled
Nov 1 00:22:03.041610 kernel: ACPI: Core revision 20230628
Nov 1 00:22:03.041616 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:22:03.041623 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:22:03.041629 kernel: x2apic enabled
Nov 1 00:22:03.041643 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:22:03.041652 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 00:22:03.041659 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 00:22:03.041666 kernel: kvm-guest: setup PV IPIs
Nov 1 00:22:03.041672 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:22:03.041679 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:22:03.041686 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Nov 1 00:22:03.041692 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:22:03.041701 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:22:03.041708 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:22:03.041715 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:22:03.041721 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:22:03.041728 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:22:03.041737 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 00:22:03.041744 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:22:03.042337 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:22:03.042413 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 00:22:03.042424 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 00:22:03.042432 kernel: active return thunk: srso_alias_return_thunk
Nov 1 00:22:03.042439 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 00:22:03.042446 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 1 00:22:03.042458 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:22:03.042465 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:22:03.042472 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:22:03.042479 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:22:03.042486 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 1 00:22:03.042492 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:22:03.042499 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 1 00:22:03.042506 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 1 00:22:03.042512 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:22:03.042522 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:22:03.042528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:22:03.042535 kernel: landlock: Up and running.
Nov 1 00:22:03.042542 kernel: SELinux: Initializing.
Nov 1 00:22:03.042549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:22:03.042555 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:22:03.042562 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 1 00:22:03.042569 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:22:03.042576 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:22:03.042585 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:22:03.042592 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:22:03.042599 kernel: ... version: 0
Nov 1 00:22:03.042605 kernel: ... bit width: 48
Nov 1 00:22:03.042612 kernel: ... generic registers: 6
Nov 1 00:22:03.042620 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:22:03.042627 kernel: ... max period: 00007fffffffffff
Nov 1 00:22:03.042634 kernel: ... fixed-purpose events: 0
Nov 1 00:22:03.042640 kernel: ... event mask: 000000000000003f
Nov 1 00:22:03.042649 kernel: signal: max sigframe size: 3376
Nov 1 00:22:03.042656 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:22:03.042663 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:22:03.042670 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:22:03.042677 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:22:03.042683 kernel: .... node #0, CPUs: #1
Nov 1 00:22:03.042690 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:22:03.042696 kernel: smpboot: Max logical packages: 1
Nov 1 00:22:03.042703 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 1 00:22:03.042712 kernel: devtmpfs: initialized
Nov 1 00:22:03.042719 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:22:03.042726 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:22:03.042733 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:22:03.042739 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:22:03.042746 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:22:03.042752 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:22:03.042759 kernel: audit: type=2000 audit(1761956522.839:1): state=initialized audit_enabled=0 res=1
Nov 1 00:22:03.042766 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:22:03.042775 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:22:03.042782 kernel: cpuidle: using governor menu
Nov 1 00:22:03.042788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:22:03.042795 kernel: dca service started, version 1.12.1
Nov 1 00:22:03.042802 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:22:03.042808 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 00:22:03.042815 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:22:03.042822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:22:03.042829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:22:03.042838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:22:03.042845 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:22:03.042851 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:22:03.042858 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:22:03.042864 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:22:03.042871 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:22:03.042878 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:22:03.042885 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:22:03.042891 kernel: ACPI: Interpreter enabled
Nov 1 00:22:03.042900 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:22:03.042907 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:22:03.042913 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:22:03.042920 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:22:03.042927 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:22:03.042933 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:22:03.043142 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:22:03.043281 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:22:03.043475 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:22:03.043489 kernel: PCI host bridge to bus 0000:00
Nov 1 00:22:03.043626 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:22:03.045493 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:22:03.045619 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:22:03.045738 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 1 00:22:03.045852 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:22:03.045975 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 1 00:22:03.046089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:22:03.046236 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:22:03.046407 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:22:03.046552 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 1 00:22:03.046679 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 1 00:22:03.046814 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 1 00:22:03.046939 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:22:03.047079 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Nov 1 00:22:03.047206 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Nov 1 00:22:03.047333 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 1 00:22:03.049578 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 1 00:22:03.049729 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:22:03.049867 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 1 00:22:03.049993 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 1 00:22:03.050120 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 1 00:22:03.050245 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 1 00:22:03.050425 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:22:03.050565 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:22:03.050702 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:22:03.050834 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Nov 1 00:22:03.050958 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Nov 1 00:22:03.051092 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:22:03.051217 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:22:03.051227 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:22:03.051234 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:22:03.051241 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:22:03.051252 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:22:03.051259 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:22:03.051265 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:22:03.051273 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:22:03.051279 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:22:03.051286 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:22:03.051293 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:22:03.051300 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:22:03.051306 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:22:03.051315 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:22:03.051322 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:22:03.051328 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:22:03.051335 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:22:03.051343 kernel: iommu: Default domain type: Translated
Nov 1 00:22:03.051350 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:22:03.053496 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:22:03.053506 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:22:03.053514 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 1 00:22:03.053526 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 1 00:22:03.053673 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:22:03.053802 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:22:03.053926 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:22:03.053936 kernel: vgaarb: loaded
Nov 1 00:22:03.053943 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:22:03.053950 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:22:03.053956 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:22:03.053967 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:22:03.053974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:22:03.053981 kernel: pnp: PnP ACPI init
Nov 1 00:22:03.054307 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:22:03.054317 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 00:22:03.054325 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:22:03.054331 kernel: NET: Registered PF_INET protocol family
Nov 1 00:22:03.054338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:22:03.054349 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:22:03.054381 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:22:03.054389 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:22:03.054396 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 00:22:03.054403 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:22:03.054410 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:22:03.054417 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:22:03.054446 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:22:03.054496 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:22:03.054638 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:22:03.054756 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:22:03.054870 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:22:03.054984 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 1 00:22:03.055098 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:22:03.055211 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 1 00:22:03.055221 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:22:03.055228 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:22:03.057404 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 1 00:22:03.057415 kernel: Initialise system trusted keyrings
Nov 1 00:22:03.057422 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:22:03.057430 kernel: Key type asymmetric registered
Nov 1 00:22:03.057436 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:22:03.057443 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:22:03.057450 kernel: io scheduler mq-deadline registered
Nov 1 00:22:03.057457 kernel: io scheduler kyber registered
Nov 1 00:22:03.057464 kernel: io scheduler bfq registered
Nov 1 00:22:03.057475 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:22:03.057483 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:22:03.057491 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:22:03.057498 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:22:03.057505 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:22:03.057512 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:22:03.057519 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:22:03.057526 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:22:03.057673 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:22:03.057688 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 1 00:22:03.057818 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:22:03.057980 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:22:02 UTC (1761956522)
Nov 1 00:22:03.058112 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:22:03.058123 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 00:22:03.058131 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:22:03.058138 kernel: Segment Routing with IPv6
Nov 1 00:22:03.058145 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:22:03.058157 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:22:03.058164 kernel: Key type dns_resolver registered
Nov 1 00:22:03.058171 kernel: IPI shorthand broadcast: enabled
Nov 1 00:22:03.058178 kernel: sched_clock: Marking stable (912008090, 352888800)->(1406738690, -141841800)
Nov 1 00:22:03.058184 kernel: registered taskstats version 1
Nov 1 00:22:03.058191 kernel: Loading compiled-in X.509 certificates
Nov 1 00:22:03.058199 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:22:03.058206 kernel: Key type .fscrypt registered
Nov 1 00:22:03.058212 kernel: Key type fscrypt-provisioning registered
Nov 1 00:22:03.058222 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:22:03.058229 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:22:03.058236 kernel: ima: No architecture policies found
Nov 1 00:22:03.058243 kernel: clk: Disabling unused clocks
Nov 1 00:22:03.058250 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:22:03.058257 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:22:03.058264 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:22:03.058271 kernel: Run /init as init process
Nov 1 00:22:03.058278 kernel: with arguments:
Nov 1 00:22:03.058288 kernel: /init
Nov 1 00:22:03.058295 kernel: with environment:
Nov 1 00:22:03.058301 kernel: HOME=/
Nov 1 00:22:03.058308 kernel: TERM=linux
Nov 1 00:22:03.058317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:22:03.058326 systemd[1]: Detected virtualization kvm.
Nov 1 00:22:03.058334 systemd[1]: Detected architecture x86-64.
Nov 1 00:22:03.058342 systemd[1]: Running in initrd.
Nov 1 00:22:03.059814 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:22:03.059827 systemd[1]: Hostname set to .
Nov 1 00:22:03.059835 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:22:03.059842 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:22:03.059850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:22:03.059876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:22:03.059889 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:22:03.059897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:22:03.059905 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:22:03.059912 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:22:03.059922 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:22:03.059929 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:22:03.059939 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:22:03.059947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:22:03.059955 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:22:03.059963 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:22:03.059970 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:22:03.059978 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:22:03.059985 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:22:03.059993 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:22:03.060001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:22:03.060011 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:22:03.060018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:22:03.060026 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:22:03.060034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:22:03.060041 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:22:03.060049 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:22:03.060057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:22:03.060064 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:22:03.060074 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:22:03.060081 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:22:03.060089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:22:03.060097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:22:03.060127 systemd-journald[178]: Collecting audit messages is disabled.
Nov 1 00:22:03.060151 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:22:03.060162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:22:03.060170 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:22:03.060182 systemd-journald[178]: Journal started
Nov 1 00:22:03.060199 systemd-journald[178]: Runtime Journal (/run/log/journal/08650f1ff8ae46699efb7f6fef1c7bb8) is 8.0M, max 78.3M, 70.3M free.
Nov 1 00:22:03.042451 systemd-modules-load[179]: Inserted module 'overlay'
Nov 1 00:22:03.064460 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:22:03.076396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:22:03.078460 kernel: Bridge firewalling registered
Nov 1 00:22:03.078549 systemd-modules-load[179]: Inserted module 'br_netfilter'
Nov 1 00:22:03.160052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:22:03.163254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:22:03.172731 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:22:03.180563 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:22:03.183641 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:22:03.196607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:22:03.208736 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:22:03.216476 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:22:03.235673 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:22:03.238452 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:22:03.241749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:22:03.249505 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:22:03.258632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:22:03.259881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:22:03.276623 dracut-cmdline[210]: dracut-dracut-053
Nov 1 00:22:03.280866 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:22:03.296833 systemd-resolved[211]: Positive Trust Anchors:
Nov 1 00:22:03.296848 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:22:03.296875 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:22:03.301577 systemd-resolved[211]: Defaulting to hostname 'linux'.
Nov 1 00:22:03.307153 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:22:03.308596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:22:03.377410 kernel: SCSI subsystem initialized
Nov 1 00:22:03.387390 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:22:03.400473 kernel: iscsi: registered transport (tcp)
Nov 1 00:22:03.425575 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:22:03.425634 kernel: QLogic iSCSI HBA Driver
Nov 1 00:22:03.492054 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:22:03.500519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:22:03.538554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:22:03.538610 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:22:03.540999 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:22:03.590404 kernel: raid6: avx2x4 gen() 25505 MB/s
Nov 1 00:22:03.608390 kernel: raid6: avx2x2 gen() 23197 MB/s
Nov 1 00:22:03.626727 kernel: raid6: avx2x1 gen() 20113 MB/s
Nov 1 00:22:03.626775 kernel: raid6: using algorithm avx2x4 gen() 25505 MB/s
Nov 1 00:22:03.649330 kernel: raid6: .... xor() 2887 MB/s, rmw enabled
Nov 1 00:22:03.649417 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:22:03.671498 kernel: xor: automatically using best checksumming function avx
Nov 1 00:22:03.813502 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:22:03.832486 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:22:03.839545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:22:03.857644 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Nov 1 00:22:03.862780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:22:03.871503 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:22:03.899673 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Nov 1 00:22:03.941212 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:22:03.947528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:22:04.025621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:22:04.034590 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:22:04.053474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:22:04.057770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:22:04.060475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:22:04.063431 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:22:04.073564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:22:04.101184 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:22:04.122559 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:22:04.131903 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 1 00:22:04.140475 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:22:04.145681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:22:04.146737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:22:04.149395 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:22:04.151516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:22:04.151575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:22:04.155084 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:22:04.164397 kernel: libata version 3.00 loaded.
Nov 1 00:22:04.165743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:22:04.342436 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:22:04.342719 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:22:04.344488 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:22:04.344665 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:22:04.351429 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:22:04.351453 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:22:04.357473 kernel: scsi host1: ahci
Nov 1 00:22:04.361424 kernel: scsi host2: ahci
Nov 1 00:22:04.366518 kernel: scsi host3: ahci
Nov 1 00:22:04.366708 kernel: scsi host4: ahci
Nov 1 00:22:04.366870 kernel: scsi host5: ahci
Nov 1 00:22:04.367392 kernel: scsi host6: ahci
Nov 1 00:22:04.367570 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29
Nov 1 00:22:04.367583 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29
Nov 1 00:22:04.367593 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29
Nov 1 00:22:04.367603 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29
Nov 1 00:22:04.367618 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29
Nov 1 00:22:04.367627 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29
Nov 1 00:22:04.512778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:22:04.524526 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:22:04.540585 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:22:04.687400 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:22:04.687448 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 1 00:22:04.687461 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:22:04.689386 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:22:04.692380 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:22:04.695382 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 00:22:04.722442 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 1 00:22:04.726389 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 1 00:22:04.726618 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 1 00:22:04.750389 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 1 00:22:04.750595 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 00:22:04.761047 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:22:04.761070 kernel: GPT:9289727 != 167739391
Nov 1 00:22:04.761082 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:22:04.764432 kernel: GPT:9289727 != 167739391
Nov 1 00:22:04.767757 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:22:04.767783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:22:04.773814 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 1 00:22:04.815412 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (456)
Nov 1 00:22:04.820403 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (439)
Nov 1 00:22:04.822292 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 1 00:22:04.832038 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 1 00:22:04.839719 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 1 00:22:04.842324 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 1 00:22:04.852825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 1 00:22:04.866913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:22:04.875325 disk-uuid[567]: Primary Header is updated.
Nov 1 00:22:04.875325 disk-uuid[567]: Secondary Entries is updated.
Nov 1 00:22:04.875325 disk-uuid[567]: Secondary Header is updated.
Nov 1 00:22:04.883397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:22:04.890390 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:22:04.899405 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:22:05.898465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:22:05.899447 disk-uuid[568]: The operation has completed successfully.
Nov 1 00:22:05.954678 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:22:05.954828 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:22:05.965495 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:22:05.971685 sh[585]: Success
Nov 1 00:22:05.988039 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:22:06.037731 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:22:06.047470 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:22:06.049628 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:22:06.075819 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:22:06.075845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:22:06.079018 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:22:06.084322 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:22:06.084345 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:22:06.095376 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 1 00:22:06.097977 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:22:06.099594 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:22:06.107684 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:22:06.113503 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:22:06.133599 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:22:06.133634 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:22:06.133647 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:22:06.141839 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:22:06.141866 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:22:06.156007 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:22:06.158384 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:22:06.168405 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:22:06.175615 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:22:06.259764 ignition[692]: Ignition 2.19.0
Nov 1 00:22:06.260882 ignition[692]: Stage: fetch-offline
Nov 1 00:22:06.260929 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:06.260941 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:22:06.264046 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:22:06.261047 ignition[692]: parsed url from cmdline: ""
Nov 1 00:22:06.267264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:22:06.261052 ignition[692]: no config URL provided
Nov 1 00:22:06.261058 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:22:06.261069 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:22:06.261075 ignition[692]: failed to fetch config: resource requires networking
Nov 1 00:22:06.261303 ignition[692]: Ignition finished successfully
Nov 1 00:22:06.280552 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:22:06.299864 systemd-networkd[773]: lo: Link UP
Nov 1 00:22:06.299881 systemd-networkd[773]: lo: Gained carrier
Nov 1 00:22:06.301616 systemd-networkd[773]: Enumeration completed
Nov 1 00:22:06.301699 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:22:06.303019 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:22:06.303025 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:22:06.303186 systemd[1]: Reached target network.target - Network.
Nov 1 00:22:06.304683 systemd-networkd[773]: eth0: Link UP
Nov 1 00:22:06.304689 systemd-networkd[773]: eth0: Gained carrier
Nov 1 00:22:06.304696 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:22:06.309881 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 1 00:22:06.331277 ignition[775]: Ignition 2.19.0
Nov 1 00:22:06.331298 ignition[775]: Stage: fetch
Nov 1 00:22:06.331509 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:06.331525 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:22:06.331629 ignition[775]: parsed url from cmdline: ""
Nov 1 00:22:06.331634 ignition[775]: no config URL provided
Nov 1 00:22:06.331641 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:22:06.331651 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:22:06.331670 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 1 00:22:06.331847 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 00:22:06.532734 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 1 00:22:06.532948 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 00:22:06.933751 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 1 00:22:06.933958 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 00:22:07.029450 systemd-networkd[773]: eth0: DHCPv4 address 172.237.159.149/24, gateway 172.237.159.1 acquired from 23.205.167.117
Nov 1 00:22:07.734091 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 1 00:22:07.826666 ignition[775]: PUT result: OK
Nov 1 00:22:07.826728 ignition[775]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 1 00:22:07.935307 ignition[775]: GET result: OK
Nov 1 00:22:07.935451 ignition[775]: parsing config with SHA512: 204fc89bfe764a145858b7ab4da18f743e7bced28b77cef61679860c89550df678002f815a92652f4cbb7354c484cc84415df5376c8786b3dace549a678f4c51
Nov 1 00:22:07.939157 unknown[775]: fetched base config from "system"
Nov 1 00:22:07.939175 unknown[775]: fetched base config from "system"
Nov 1 00:22:07.939484 ignition[775]: fetch: fetch complete
Nov 1 00:22:07.939183 unknown[775]: fetched user config from "akamai"
Nov 1 00:22:07.939491 ignition[775]: fetch: fetch passed
Nov 1 00:22:07.942369 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 00:22:07.939537 ignition[775]: Ignition finished successfully
Nov 1 00:22:07.948584 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:22:07.963955 ignition[783]: Ignition 2.19.0
Nov 1 00:22:07.963977 ignition[783]: Stage: kargs
Nov 1 00:22:07.964130 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:07.967870 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:22:07.964143 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:22:07.964845 ignition[783]: kargs: kargs passed
Nov 1 00:22:07.964894 ignition[783]: Ignition finished successfully
Nov 1 00:22:07.970606 systemd-networkd[773]: eth0: Gained IPv6LL
Nov 1 00:22:07.977538 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:22:07.991090 ignition[789]: Ignition 2.19.0
Nov 1 00:22:07.991110 ignition[789]: Stage: disks
Nov 1 00:22:07.991259 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:07.995723 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:22:07.991271 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:22:08.016658 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:22:07.991947 ignition[789]: disks: disks passed
Nov 1 00:22:08.017881 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:22:07.991987 ignition[789]: Ignition finished successfully
Nov 1 00:22:08.019933 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:22:08.022158 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:22:08.024458 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:22:08.033515 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:22:08.051724 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:22:08.055725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:22:08.062600 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:22:08.154550 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:22:08.154324 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:22:08.155853 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:22:08.163490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:22:08.167880 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:22:08.171406 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:22:08.173166 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:22:08.173193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:22:08.182143 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (806)
Nov 1 00:22:08.187386 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:22:08.187420 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:22:08.189806 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:22:08.194137 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:22:08.201330 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:22:08.201393 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:22:08.203527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:22:08.209174 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:22:08.261681 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:22:08.268008 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:22:08.273728 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:22:08.279434 initrd-setup-root[852]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:22:08.390792 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:22:08.401488 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:22:08.405827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:22:08.412630 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:22:08.417159 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:08.449869 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:22:08.452758 ignition[920]: INFO : Ignition 2.19.0 Nov 1 00:22:08.452758 ignition[920]: INFO : Stage: mount Nov 1 00:22:08.452758 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:08.452758 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:08.458817 ignition[920]: INFO : mount: mount passed Nov 1 00:22:08.458817 ignition[920]: INFO : Ignition finished successfully Nov 1 00:22:08.455956 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:22:08.463499 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:22:09.160512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:22:09.177392 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (931) Nov 1 00:22:09.184618 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:09.184669 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:22:09.184682 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:22:09.192785 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:22:09.192817 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:22:09.197628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:22:09.228188 ignition[948]: INFO : Ignition 2.19.0 Nov 1 00:22:09.228188 ignition[948]: INFO : Stage: files Nov 1 00:22:09.230588 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:09.230588 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:09.230588 ignition[948]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:22:09.234419 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:22:09.234419 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:22:09.236177 unknown[948]: wrote ssh authorized keys file for user: core Nov 1 00:22:09.243947 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:22:09.440645 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:22:09.484056 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:09.486039 ignition[948]: 
INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:09.505709 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:22:10.061752 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:22:10.345975 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:10.349427 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(f): [started] setting preset 
to enabled for "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:22:10.355918 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:22:10.355918 ignition[948]: INFO : files: files passed Nov 1 00:22:10.355918 ignition[948]: INFO : Ignition finished successfully Nov 1 00:22:10.363321 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:22:10.390475 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:22:10.394500 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:22:10.396214 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:22:10.398480 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:22:10.410643 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:10.410643 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:10.414190 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:10.414688 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:22:10.417006 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:22:10.431489 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:22:10.460011 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:22:10.460221 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:22:10.462415 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:22:10.464043 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:22:10.466011 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:22:10.476521 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:22:10.489490 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:22:10.496641 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:22:10.507512 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:22:10.508926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:22:10.511261 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:22:10.513605 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:22:10.513702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:22:10.516108 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:22:10.517601 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:22:10.519753 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:22:10.521581 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
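The "setting preset to enabled" step above is Ignition arranging for systemd to enable prepare-helm.service on the first real boot by writing a preset file under the new root. A minimal sketch of that write; the exact preset filename is an assumption:

    package main

    import (
        "os"
        "path/filepath"
    )

    func main() {
        // a preset file under the new root tells systemd to enable the unit on first boot
        dir := "/sysroot/etc/systemd/system-preset"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        preset := []byte("enable prepare-helm.service\n")
        if err := os.WriteFile(filepath.Join(dir, "20-ignition.preset"), preset, 0o644); err != nil {
            panic(err)
        }
    }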
Nov 1 00:22:10.523613 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:22:10.525808 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:22:10.528035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:22:10.530394 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:22:10.532407 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:22:10.534139 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:22:10.536120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:22:10.536216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:22:10.538650 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:22:10.539873 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:22:10.542022 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:22:10.543081 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:22:10.545555 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:22:10.545659 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:22:10.548582 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:22:10.548690 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:22:10.550042 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:22:10.550138 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:22:10.559819 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:22:10.564705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:22:10.565569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:22:10.565741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:22:10.568546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:22:10.568647 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:22:10.577829 ignition[1000]: INFO : Ignition 2.19.0 Nov 1 00:22:10.577829 ignition[1000]: INFO : Stage: umount Nov 1 00:22:10.577829 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:10.577829 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:10.593824 ignition[1000]: INFO : umount: umount passed Nov 1 00:22:10.593824 ignition[1000]: INFO : Ignition finished successfully Nov 1 00:22:10.584859 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:22:10.586491 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:22:10.590420 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:22:10.590560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:22:10.596540 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:22:10.596595 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:22:10.598928 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:22:10.598979 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:22:10.599860 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:22:10.599911 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 1 00:22:10.602578 systemd[1]: Stopped target network.target - Network. Nov 1 00:22:10.603733 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:22:10.603788 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:22:10.606796 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:22:10.608205 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:22:10.612606 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:22:10.613555 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:22:10.615565 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:22:10.639033 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:22:10.639096 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:22:10.640936 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:22:10.640982 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:22:10.643053 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:22:10.643103 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:22:10.644752 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:22:10.644802 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:22:10.646862 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:22:10.649020 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:22:10.651472 systemd-networkd[773]: eth0: DHCPv6 lease lost Nov 1 00:22:10.653726 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:22:10.654325 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:22:10.655542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:22:10.659796 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:22:10.659936 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:22:10.661480 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:22:10.661602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:22:10.664872 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:22:10.664932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:22:10.666224 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:22:10.666276 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:22:10.675659 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:22:10.677068 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:22:10.677124 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:22:10.678099 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:22:10.678152 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:22:10.680437 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:22:10.680487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:22:10.682238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:22:10.682287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 1 00:22:10.684181 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:22:10.700047 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:22:10.700177 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:22:10.702144 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:22:10.702311 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:22:10.704762 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:22:10.704831 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:22:10.706300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:22:10.706343 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:22:10.708107 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:22:10.708159 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:22:10.710545 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:22:10.710595 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:22:10.712589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:22:10.712638 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:22:10.722489 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:22:10.724006 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:22:10.724062 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:22:10.726898 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:22:10.726952 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:22:10.730013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:22:10.730065 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:22:10.731752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:22:10.731804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:10.734197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:22:10.734297 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:22:10.736214 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:22:10.743515 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:22:10.753969 systemd[1]: Switching root. 
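"Switching root." is the initrd's last act: the prepared /sysroot is moved over /, PID 1 chroots into it, and the real init is exec'd, which is why journald stops just below. A stripped-down sketch of the mechanism only; real systemd also carries /dev, /proc, and /sys across, and the systemd binary path here is assumed:

    package main

    import "golang.org/x/sys/unix"

    func main() {
        if err := unix.Chdir("/sysroot"); err != nil {
            panic(err)
        }
        // MS_MOVE grafts the already-mounted /sysroot tree onto / atomically
        if err := unix.Mount(".", "/", "", unix.MS_MOVE, ""); err != nil {
            panic(err)
        }
        if err := unix.Chroot("."); err != nil {
            panic(err)
        }
        if err := unix.Chdir("/"); err != nil {
            panic(err)
        }
        // exec the real init; this process remains PID 1 throughout
        if err := unix.Exec("/usr/lib/systemd/systemd", []string{"systemd"}, []string{"TERM=linux"}); err != nil {
            panic(err)
        }
    }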
Nov 1 00:22:10.787971 systemd-journald[178]: Journal stopped Nov 1
00:22:03.041175 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Nov 1 00:22:03.041181 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Nov 1 00:22:03.041190 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:22:03.041196 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:22:03.041203 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Nov 1 00:22:03.041209 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:22:03.041216 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:22:03.041222 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:22:03.041229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:22:03.041235 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:22:03.041242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:22:03.041250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:22:03.041257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:22:03.041263 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:22:03.041270 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:22:03.041276 kernel: TSC deadline timer available Nov 1 00:22:03.041283 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 00:22:03.041289 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 1 00:22:03.041295 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 1 00:22:03.041302 kernel: kvm-guest: setup PV sched yield Nov 1 00:22:03.041308 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 1 00:22:03.041317 kernel: Booting paravirtualized kernel on KVM Nov 1 00:22:03.041324 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:22:03.041331 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 1 00:22:03.041338 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 1 00:22:03.041344 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 1 00:22:03.041350 kernel: pcpu-alloc: [0] 0 1 Nov 1 00:22:03.041398 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:22:03.041409 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:22:03.041417 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:22:03.041429 kernel: random: crng init done Nov 1 00:22:03.041435 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:22:03.041442 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:22:03.041448 kernel: Fallback order for Node 0: 0 Nov 1 00:22:03.041455 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Nov 1 00:22:03.041461 kernel: Policy zone: Normal Nov 1 00:22:03.041468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:22:03.041474 kernel: software IO TLB: area num 2. 
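Note the kernel command line echoed above carries rootflags=rw mount.usrflags=ro twice (duplicated by the boot chain). Consumers of it typically split on whitespace and let the last occurrence of a key win; a small sketch of that parse against the real /proc/cmdline:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/proc/cmdline")
        if err != nil {
            panic(err)
        }
        args := map[string]string{}
        for _, field := range strings.Fields(string(raw)) {
            key, val, _ := strings.Cut(field, "=") // bare flags get an empty value
            args[key] = val                        // a later duplicate overwrites an earlier one
        }
        fmt.Println("root:", args["root"], "oem:", args["flatcar.oem.id"])
    }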
Nov 1 00:22:03.041484 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 227300K reserved, 0K cma-reserved) Nov 1 00:22:03.041490 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:22:03.041497 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:22:03.041503 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:22:03.041510 kernel: Dynamic Preempt: voluntary Nov 1 00:22:03.041516 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:22:03.041527 kernel: rcu: RCU event tracing is enabled. Nov 1 00:22:03.041534 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:22:03.041541 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:22:03.041550 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:22:03.041557 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:22:03.041563 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:22:03.041569 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:22:03.041576 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 1 00:22:03.041582 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 00:22:03.041588 kernel: Console: colour VGA+ 80x25 Nov 1 00:22:03.041595 kernel: printk: console [tty0] enabled Nov 1 00:22:03.041601 kernel: printk: console [ttyS0] enabled Nov 1 00:22:03.041610 kernel: ACPI: Core revision 20230628 Nov 1 00:22:03.041616 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:22:03.041623 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:22:03.041629 kernel: x2apic enabled Nov 1 00:22:03.041643 kernel: APIC: Switched APIC routing to: physical x2apic Nov 1 00:22:03.041652 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 1 00:22:03.041659 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 1 00:22:03.041666 kernel: kvm-guest: setup PV IPIs Nov 1 00:22:03.041672 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:22:03.041679 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 1 00:22:03.041686 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Nov 1 00:22:03.041692 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 00:22:03.041701 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 1 00:22:03.041708 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 1 00:22:03.041715 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:22:03.041721 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:22:03.041728 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:22:03.041737 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 1 00:22:03.041744 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:22:03.042337 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:22:03.042413 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 1 00:22:03.042424 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
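The mitigation verdicts printed in this stretch of the log (Spectre, SRSO, and the TSA lines continuing below) remain queryable after boot from the real sysfs directory /sys/devices/system/cpu/vulnerabilities, which this sketch dumps:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/sys/devices/system/cpu/vulnerabilities"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            data, err := os.ReadFile(filepath.Join(dir, e.Name()))
            if err != nil {
                continue
            }
            // e.g. "spec_rstack_overflow: Vulnerable: Safe RET, no microcode"
            fmt.Printf("%s: %s\n", e.Name(), strings.TrimSpace(string(data)))
        }
    }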
Nov 1 00:22:03.042432 kernel: active return thunk: srso_alias_return_thunk Nov 1 00:22:03.042439 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 1 00:22:03.042446 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Nov 1 00:22:03.042458 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:22:03.042465 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:22:03.042472 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:22:03.042479 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:22:03.042486 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 1 00:22:03.042492 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:22:03.042499 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Nov 1 00:22:03.042506 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Nov 1 00:22:03.042512 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:22:03.042522 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:22:03.042528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:22:03.042535 kernel: landlock: Up and running. Nov 1 00:22:03.042542 kernel: SELinux: Initializing. Nov 1 00:22:03.042549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:22:03.042555 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:22:03.042562 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Nov 1 00:22:03.042569 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:22:03.042576 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:22:03.042585 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:22:03.042592 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 1 00:22:03.042599 kernel: ... version: 0 Nov 1 00:22:03.042605 kernel: ... bit width: 48 Nov 1 00:22:03.042612 kernel: ... generic registers: 6 Nov 1 00:22:03.042620 kernel: ... value mask: 0000ffffffffffff Nov 1 00:22:03.042627 kernel: ... max period: 00007fffffffffff Nov 1 00:22:03.042634 kernel: ... fixed-purpose events: 0 Nov 1 00:22:03.042640 kernel: ... event mask: 000000000000003f Nov 1 00:22:03.042649 kernel: signal: max sigframe size: 3376 Nov 1 00:22:03.042656 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:22:03.042663 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:22:03.042670 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:22:03.042677 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:22:03.042683 kernel: .... 
node #0, CPUs: #1 Nov 1 00:22:03.042690 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:22:03.042696 kernel: smpboot: Max logical packages: 1 Nov 1 00:22:03.042703 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Nov 1 00:22:03.042712 kernel: devtmpfs: initialized Nov 1 00:22:03.042719 kernel: x86/mm: Memory block size: 128MB Nov 1 00:22:03.042726 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:22:03.042733 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:22:03.042739 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:22:03.042746 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:22:03.042752 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:22:03.042759 kernel: audit: type=2000 audit(1761956522.839:1): state=initialized audit_enabled=0 res=1 Nov 1 00:22:03.042766 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:22:03.042775 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:22:03.042782 kernel: cpuidle: using governor menu Nov 1 00:22:03.042788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:22:03.042795 kernel: dca service started, version 1.12.1 Nov 1 00:22:03.042802 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 1 00:22:03.042808 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 1 00:22:03.042815 kernel: PCI: Using configuration type 1 for base access Nov 1 00:22:03.042822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:22:03.042829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:22:03.042838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:22:03.042845 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:22:03.042851 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:22:03.042858 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:22:03.042864 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:22:03.042871 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:22:03.042878 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:22:03.042885 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 1 00:22:03.042891 kernel: ACPI: Interpreter enabled Nov 1 00:22:03.042900 kernel: ACPI: PM: (supports S0 S3 S5) Nov 1 00:22:03.042907 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:22:03.042913 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:22:03.042920 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:22:03.042927 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 00:22:03.042933 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:22:03.043142 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:22:03.043281 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 1 00:22:03.043475 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 1 00:22:03.043489 kernel: PCI host bridge to bus 0000:00 Nov 1 00:22:03.043626 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:22:03.045493 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:22:03.045619 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:22:03.045738 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 1 00:22:03.045852 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:22:03.045975 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Nov 1 00:22:03.046089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:22:03.046236 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 1 00:22:03.046407 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 1 00:22:03.046552 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 1 00:22:03.046679 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 1 00:22:03.046814 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 1 00:22:03.046939 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:22:03.047079 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Nov 1 00:22:03.047206 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Nov 1 00:22:03.047333 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 1 00:22:03.049578 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 1 00:22:03.049729 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:22:03.049867 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 1 00:22:03.049993 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 1 00:22:03.050120 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 1 00:22:03.050245 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 1 00:22:03.050425 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 1 00:22:03.050565 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 00:22:03.050702 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 1 00:22:03.050834 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Nov 1 00:22:03.050958 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Nov 1 00:22:03.051092 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 1 00:22:03.051217 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 1 00:22:03.051227 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:22:03.051234 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:22:03.051241 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:22:03.051252 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:22:03.051259 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 00:22:03.051265 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 00:22:03.051273 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 00:22:03.051279 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 00:22:03.051286 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 00:22:03.051293 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 00:22:03.051300 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 00:22:03.051306 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 00:22:03.051315 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 00:22:03.051322 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 00:22:03.051328 
kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 1 00:22:03.051335 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 00:22:03.051343 kernel: iommu: Default domain type: Translated Nov 1 00:22:03.051350 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:22:03.053496 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:22:03.053506 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:22:03.053514 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Nov 1 00:22:03.053526 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Nov 1 00:22:03.053673 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 00:22:03.053802 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 00:22:03.053926 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:22:03.053936 kernel: vgaarb: loaded Nov 1 00:22:03.053943 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:22:03.053950 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 1 00:22:03.053956 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:22:03.053967 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:22:03.053974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:22:03.053981 kernel: pnp: PnP ACPI init Nov 1 00:22:03.054307 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 00:22:03.054317 kernel: pnp: PnP ACPI: found 5 devices Nov 1 00:22:03.054325 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:22:03.054331 kernel: NET: Registered PF_INET protocol family Nov 1 00:22:03.054338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:22:03.054349 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:22:03.054381 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:22:03.054389 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:22:03.054396 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 1 00:22:03.054403 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:22:03.054410 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:22:03.054417 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:22:03.054446 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:22:03.054496 kernel: NET: Registered PF_XDP protocol family Nov 1 00:22:03.054638 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:22:03.054756 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:22:03.054870 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:22:03.054984 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 1 00:22:03.055098 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 00:22:03.055211 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Nov 1 00:22:03.055221 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:22:03.055228 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 00:22:03.057404 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Nov 1 00:22:03.057415 kernel: Initialise system trusted keyrings Nov 1 00:22:03.057422 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Nov 1 00:22:03.057430 kernel: Key type asymmetric registered Nov 1 00:22:03.057436 kernel: Asymmetric key parser 'x509' registered Nov 1 00:22:03.057443 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:22:03.057450 kernel: io scheduler mq-deadline registered Nov 1 00:22:03.057457 kernel: io scheduler kyber registered Nov 1 00:22:03.057464 kernel: io scheduler bfq registered Nov 1 00:22:03.057475 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:22:03.057483 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:22:03.057491 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:22:03.057498 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:22:03.057505 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:22:03.057512 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:22:03.057519 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:22:03.057526 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:22:03.057673 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 00:22:03.057688 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 1 00:22:03.057818 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 00:22:03.057980 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:22:02 UTC (1761956522) Nov 1 00:22:03.058112 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:22:03.058123 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 00:22:03.058131 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:22:03.058138 kernel: Segment Routing with IPv6 Nov 1 00:22:03.058145 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:22:03.058157 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:22:03.058164 kernel: Key type dns_resolver registered Nov 1 00:22:03.058171 kernel: IPI shorthand broadcast: enabled Nov 1 00:22:03.058178 kernel: sched_clock: Marking stable (912008090, 352888800)->(1406738690, -141841800) Nov 1 00:22:03.058184 kernel: registered taskstats version 1 Nov 1 00:22:03.058191 kernel: Loading compiled-in X.509 certificates Nov 1 00:22:03.058199 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:22:03.058206 kernel: Key type .fscrypt registered Nov 1 00:22:03.058212 kernel: Key type fscrypt-provisioning registered Nov 1 00:22:03.058222 kernel: ima: No TPM chip found, activating TPM-bypass! 
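Of the three I/O schedulers registered above (mq-deadline, kyber, bfq), the one active for a given block device is shown bracketed in sysfs; reading that file is enough to confirm which scheduler sda ended up with:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/sys/block/sda/queue/scheduler")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(data)) // the bracketed name, e.g. "[bfq]", is the active one
    }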
Nov 1 00:22:03.058229 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:22:03.058236 kernel: ima: No architecture policies found Nov 1 00:22:03.058243 kernel: clk: Disabling unused clocks Nov 1 00:22:03.058250 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:22:03.058257 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:22:03.058264 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:22:03.058271 kernel: Run /init as init process Nov 1 00:22:03.058278 kernel: with arguments: Nov 1 00:22:03.058288 kernel: /init Nov 1 00:22:03.058295 kernel: with environment: Nov 1 00:22:03.058301 kernel: HOME=/ Nov 1 00:22:03.058308 kernel: TERM=linux Nov 1 00:22:03.058317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:22:03.058326 systemd[1]: Detected virtualization kvm. Nov 1 00:22:03.058334 systemd[1]: Detected architecture x86-64. Nov 1 00:22:03.058342 systemd[1]: Running in initrd. Nov 1 00:22:03.059814 systemd[1]: No hostname configured, using default hostname. Nov 1 00:22:03.059827 systemd[1]: Hostname set to <localhost>. Nov 1 00:22:03.059835 systemd[1]: Initializing machine ID from random generator. Nov 1 00:22:03.059842 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:22:03.059850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:22:03.059876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:22:03.059889 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:22:03.059897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:22:03.059905 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:22:03.059912 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:22:03.059922 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:22:03.059929 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:22:03.059939 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:22:03.059947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:22:03.059955 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:22:03.059963 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:22:03.059970 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:22:03.059978 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:22:03.059985 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:22:03.059993 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:22:03.060001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:22:03.060011 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:22:03.060018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:22:03.060026 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:22:03.060034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:22:03.060041 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:22:03.060049 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:22:03.060057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:22:03.060064 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:22:03.060074 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:22:03.060081 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:22:03.060089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:22:03.060097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:22:03.060127 systemd-journald[178]: Collecting audit messages is disabled. Nov 1 00:22:03.060151 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:22:03.060162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:22:03.060170 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:22:03.060182 systemd-journald[178]: Journal started Nov 1 00:22:03.060199 systemd-journald[178]: Runtime Journal (/run/log/journal/08650f1ff8ae46699efb7f6fef1c7bb8) is 8.0M, max 78.3M, 70.3M free. Nov 1 00:22:03.042451 systemd-modules-load[179]: Inserted module 'overlay' Nov 1 00:22:03.064460 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:22:03.076396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:22:03.078460 kernel: Bridge firewalling registered Nov 1 00:22:03.078549 systemd-modules-load[179]: Inserted module 'br_netfilter' Nov 1 00:22:03.160052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:22:03.163254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:03.172731 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:22:03.180563 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:22:03.183641 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:22:03.196607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:22:03.208736 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:22:03.216476 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:22:03.235673 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:22:03.238452 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:22:03.241749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:22:03.249505 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:22:03.258632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
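The "Inserted module" lines above come from systemd-modules-load; whether a module such as br_netfilter actually stuck can be confirmed by scanning the real procfs file /proc/modules, as in this sketch:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/modules")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "br_netfilter ") {
                fmt.Println("loaded:", sc.Text())
                return
            }
        }
        fmt.Println("br_netfilter not loaded")
    }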
Nov 1 00:22:03.259881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:22:03.276623 dracut-cmdline[210]: dracut-dracut-053 Nov 1 00:22:03.280866 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:22:03.296833 systemd-resolved[211]: Positive Trust Anchors: Nov 1 00:22:03.296848 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:22:03.296875 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:22:03.301577 systemd-resolved[211]: Defaulting to hostname 'linux'. Nov 1 00:22:03.307153 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:22:03.308596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:22:03.377410 kernel: SCSI subsystem initialized Nov 1 00:22:03.387390 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:22:03.400473 kernel: iscsi: registered transport (tcp) Nov 1 00:22:03.425575 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:22:03.425634 kernel: QLogic iSCSI HBA Driver Nov 1 00:22:03.492054 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:22:03.500519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:22:03.538554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:22:03.538610 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:22:03.540999 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:22:03.590404 kernel: raid6: avx2x4 gen() 25505 MB/s Nov 1 00:22:03.608390 kernel: raid6: avx2x2 gen() 23197 MB/s Nov 1 00:22:03.626727 kernel: raid6: avx2x1 gen() 20113 MB/s Nov 1 00:22:03.626775 kernel: raid6: using algorithm avx2x4 gen() 25505 MB/s Nov 1 00:22:03.649330 kernel: raid6: .... xor() 2887 MB/s, rmw enabled Nov 1 00:22:03.649417 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:22:03.671498 kernel: xor: automatically using best checksumming function avx Nov 1 00:22:03.813502 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:22:03.832486 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:22:03.839545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:22:03.857644 systemd-udevd[395]: Using default interface naming scheme 'v255'. Nov 1 00:22:03.862780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 1 00:22:03.871503 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:22:03.899673 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Nov 1 00:22:03.941212 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:22:03.947528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:22:04.025621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:22:04.034590 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:22:04.053474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:22:04.057770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:22:04.060475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:22:04.063431 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:22:04.073564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:22:04.101184 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:22:04.122559 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:22:04.131903 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 1 00:22:04.140475 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:22:04.145681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:22:04.146737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:22:04.149395 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:22:04.151516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:22:04.151575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:04.155084 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:22:04.164397 kernel: libata version 3.00 loaded. Nov 1 00:22:04.165743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:22:04.342436 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:22:04.342719 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:22:04.344488 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:22:04.344665 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:22:04.351429 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:22:04.351453 kernel: AES CTR mode by8 optimization enabled Nov 1 00:22:04.357473 kernel: scsi host1: ahci Nov 1 00:22:04.361424 kernel: scsi host2: ahci Nov 1 00:22:04.366518 kernel: scsi host3: ahci Nov 1 00:22:04.366708 kernel: scsi host4: ahci Nov 1 00:22:04.366870 kernel: scsi host5: ahci Nov 1 00:22:04.367392 kernel: scsi host6: ahci Nov 1 00:22:04.367570 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 Nov 1 00:22:04.367583 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 Nov 1 00:22:04.367593 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 Nov 1 00:22:04.367603 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 Nov 1 00:22:04.367618 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 Nov 1 00:22:04.367627 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 Nov 1 00:22:04.512778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:04.524526 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:22:04.540585 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:22:04.687400 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:22:04.687448 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 00:22:04.687461 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:22:04.689386 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:22:04.692380 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:22:04.695382 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:22:04.722442 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 1 00:22:04.726389 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 1 00:22:04.726618 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:22:04.750389 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 1 00:22:04.750595 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:22:04.761047 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:22:04.761070 kernel: GPT:9289727 != 167739391 Nov 1 00:22:04.761082 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:22:04.764432 kernel: GPT:9289727 != 167739391 Nov 1 00:22:04.767757 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:22:04.767783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:22:04.773814 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:22:04.815412 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (456) Nov 1 00:22:04.820403 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (439) Nov 1 00:22:04.822292 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 1 00:22:04.832038 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 1 00:22:04.839719 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 1 00:22:04.842324 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 1 00:22:04.852825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
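[Editor's note] The GPT complaints above are the expected signature of a small disk image written onto a larger volume: the image's backup GPT header sits at sector 9289727 (the image's original end), while the 80 GiB Linode volume actually ends at sector 167739391 (167739392 sectors x 512 B, the 85.9 GB the kernel reports). The kernel suggests GNU Parted, but the generic fix is simply relocating the backup structures to the true end of the disk; Flatcar's initrd does this automatically a moment later via disk-uuid.service. A hedged sketch of the manual equivalent:

    # move the backup GPT header/table to the end of the disk
    sgdisk --move-second-header /dev/sda   # short form: sgdisk -e /dev/sda

    # or interactively with parted, which offers to 'Fix' the mismatch
    parted /dev/sda print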
Nov 1 00:22:04.866913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:22:04.875325 disk-uuid[567]: Primary Header is updated. Nov 1 00:22:04.875325 disk-uuid[567]: Secondary Entries is updated. Nov 1 00:22:04.875325 disk-uuid[567]: Secondary Header is updated. Nov 1 00:22:04.883397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:22:04.890390 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:22:04.899405 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:22:05.898465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:22:05.899447 disk-uuid[568]: The operation has completed successfully. Nov 1 00:22:05.954678 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:22:05.954828 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:22:05.965495 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:22:05.971685 sh[585]: Success Nov 1 00:22:05.988039 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:22:06.037731 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:22:06.047470 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:22:06.049628 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:22:06.075819 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:22:06.075845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:22:06.079018 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:22:06.084322 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:22:06.084345 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:22:06.095376 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 00:22:06.097977 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:22:06.099594 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:22:06.107684 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:22:06.113503 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:22:06.133599 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:06.133634 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:22:06.133647 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:22:06.141839 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:22:06.141866 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:22:06.156007 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:22:06.158384 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:06.168405 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:22:06.175615 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 1 00:22:06.259764 ignition[692]: Ignition 2.19.0 Nov 1 00:22:06.260882 ignition[692]: Stage: fetch-offline Nov 1 00:22:06.260929 ignition[692]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:06.260941 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:06.264046 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:22:06.261047 ignition[692]: parsed url from cmdline: "" Nov 1 00:22:06.267264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:22:06.261052 ignition[692]: no config URL provided Nov 1 00:22:06.261058 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:22:06.261069 ignition[692]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:22:06.261075 ignition[692]: failed to fetch config: resource requires networking Nov 1 00:22:06.261303 ignition[692]: Ignition finished successfully Nov 1 00:22:06.280552 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:22:06.299864 systemd-networkd[773]: lo: Link UP Nov 1 00:22:06.299881 systemd-networkd[773]: lo: Gained carrier Nov 1 00:22:06.301616 systemd-networkd[773]: Enumeration completed Nov 1 00:22:06.301699 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:22:06.303019 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:22:06.303025 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:22:06.303186 systemd[1]: Reached target network.target - Network. Nov 1 00:22:06.304683 systemd-networkd[773]: eth0: Link UP Nov 1 00:22:06.304689 systemd-networkd[773]: eth0: Gained carrier Nov 1 00:22:06.304696 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:22:06.309881 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
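[Editor's note] fetch-offline fails here by design: there is no config embedded at /usr/lib/ignition/user.ign, and the platform config has to come from the metadata service, which needs networking, hence the fetch stage queued behind systemd-networkd. For reference, Ignition 2.19 consumes spec-3.x JSON; a minimal, purely hypothetical user config of the kind it is looking for would be:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"] }
        ]
      }
    }

The later 'files' stage output (creating user "core", adding ssh keys, writing unit files) is Ignition acting on exactly this kind of document as delivered by the Akamai/Linode metadata service.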
Nov 1 00:22:06.331277 ignition[775]: Ignition 2.19.0 Nov 1 00:22:06.331298 ignition[775]: Stage: fetch Nov 1 00:22:06.331509 ignition[775]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:06.331525 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:06.331629 ignition[775]: parsed url from cmdline: "" Nov 1 00:22:06.331634 ignition[775]: no config URL provided Nov 1 00:22:06.331641 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:22:06.331651 ignition[775]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:22:06.331670 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 1 00:22:06.331847 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:22:06.532734 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #2 Nov 1 00:22:06.532948 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:22:06.933751 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #3 Nov 1 00:22:06.933958 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:22:07.029450 systemd-networkd[773]: eth0: DHCPv4 address 172.237.159.149/24, gateway 172.237.159.1 acquired from 23.205.167.117 Nov 1 00:22:07.734091 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #4 Nov 1 00:22:07.826666 ignition[775]: PUT result: OK Nov 1 00:22:07.826728 ignition[775]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 1 00:22:07.935307 ignition[775]: GET result: OK Nov 1 00:22:07.935451 ignition[775]: parsing config with SHA512: 204fc89bfe764a145858b7ab4da18f743e7bced28b77cef61679860c89550df678002f815a92652f4cbb7354c484cc84415df5376c8786b3dace549a678f4c51 Nov 1 00:22:07.939157 unknown[775]: fetched base config from "system" Nov 1 00:22:07.939175 unknown[775]: fetched base config from "system" Nov 1 00:22:07.939484 ignition[775]: fetch: fetch complete Nov 1 00:22:07.939183 unknown[775]: fetched user config from "akamai" Nov 1 00:22:07.939491 ignition[775]: fetch: fetch passed Nov 1 00:22:07.942369 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:22:07.939537 ignition[775]: Ignition finished successfully Nov 1 00:22:07.948584 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:22:07.963955 ignition[783]: Ignition 2.19.0 Nov 1 00:22:07.963977 ignition[783]: Stage: kargs Nov 1 00:22:07.964130 ignition[783]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:07.967870 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:22:07.964143 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:07.964845 ignition[783]: kargs: kargs passed Nov 1 00:22:07.964894 ignition[783]: Ignition finished successfully Nov 1 00:22:07.970606 systemd-networkd[773]: eth0: Gained IPv6LL Nov 1 00:22:07.977538 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:22:07.991090 ignition[789]: Ignition 2.19.0 Nov 1 00:22:07.991110 ignition[789]: Stage: disks Nov 1 00:22:07.991259 ignition[789]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:07.995723 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:22:07.991271 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:08.016658 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
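[Editor's note] The token dance above is an IMDSv2-style handshake against the Linode metadata service: Ignition must first PUT /v1/token, then present the token on the actual GET. The early attempts fail with 'network is unreachable' because eth0 has no address yet; the retry spacing (roughly 0.2 s, 0.4 s, 0.8 s between attempts) is a doubling backoff, and attempt #4 succeeds only after the DHCPv4 lease for 172.237.159.149 lands. A sketch of the same handshake by hand; the header names follow Linode's metadata documentation but should be treated as assumptions here:

    TOKEN=$(curl -s -X PUT -H "Metadata-Token-Expiry-Seconds: 3600" \
            http://169.254.169.254/v1/token)
    curl -s -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/user-data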
Nov 1 00:22:07.991947 ignition[789]: disks: disks passed Nov 1 00:22:08.017881 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:22:07.991987 ignition[789]: Ignition finished successfully Nov 1 00:22:08.019933 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:22:08.022158 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:22:08.024458 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:22:08.033515 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:22:08.051724 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 00:22:08.055725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:22:08.062600 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:22:08.154550 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:22:08.154324 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:22:08.155853 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:22:08.163490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:22:08.167880 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:22:08.171406 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:22:08.173166 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:22:08.173193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:22:08.182143 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (806) Nov 1 00:22:08.187386 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:08.187420 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:22:08.189806 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:22:08.194137 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:22:08.201330 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:22:08.201393 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:22:08.203527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:22:08.209174 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:22:08.261681 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:22:08.268008 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:22:08.273728 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:22:08.279434 initrd-setup-root[852]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:22:08.390792 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:22:08.401488 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:22:08.405827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:22:08.412630 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
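[Editor's note] The fsck summary above reads as inode and block usage: 14 of 553520 inodes and 52654 of 553472 blocks are in use on the freshly created ROOT filesystem, which is why it mounts clean immediately afterwards. The same counters can be read back without running a check; a small sketch:

    # read-only dump of the superblock counters fsck reported
    tune2fs -l /dev/disk/by-label/ROOT | grep -E 'Inode count|Block count|Free (blocks|inodes)'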
Nov 1 00:22:08.417159 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:08.449869 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:22:08.452758 ignition[920]: INFO : Ignition 2.19.0 Nov 1 00:22:08.452758 ignition[920]: INFO : Stage: mount Nov 1 00:22:08.452758 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:08.452758 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:08.458817 ignition[920]: INFO : mount: mount passed Nov 1 00:22:08.458817 ignition[920]: INFO : Ignition finished successfully Nov 1 00:22:08.455956 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:22:08.463499 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:22:09.160512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:22:09.177392 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (931) Nov 1 00:22:09.184618 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:09.184669 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:22:09.184682 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:22:09.192785 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:22:09.192817 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:22:09.197628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:22:09.228188 ignition[948]: INFO : Ignition 2.19.0 Nov 1 00:22:09.228188 ignition[948]: INFO : Stage: files Nov 1 00:22:09.230588 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:09.230588 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:09.230588 ignition[948]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:22:09.234419 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:22:09.234419 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:22:09.237475 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:22:09.236177 unknown[948]: wrote ssh authorized keys file for user: core Nov 1 00:22:09.243947 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:22:09.440645 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:22:09.484056 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:09.486039 ignition[948]: 
INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:09.486039 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:09.505709 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:22:10.061752 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:22:10.345975 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:22:10.349427 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(f): [started] setting preset 
to enabled for "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:22:10.355918 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:22:10.355918 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:22:10.355918 ignition[948]: INFO : files: files passed Nov 1 00:22:10.355918 ignition[948]: INFO : Ignition finished successfully Nov 1 00:22:10.363321 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:22:10.390475 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:22:10.394500 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:22:10.396214 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:22:10.398480 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:22:10.410643 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:10.410643 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:10.414190 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:10.414688 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:22:10.417006 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:22:10.431489 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:22:10.460011 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:22:10.460221 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:22:10.462415 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:22:10.464043 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:22:10.466011 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:22:10.476521 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:22:10.489490 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:22:10.496641 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:22:10.507512 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:22:10.508926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:22:10.511261 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:22:10.513605 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:22:10.513702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:22:10.516108 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:22:10.517601 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:22:10.519753 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:22:10.521581 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
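[Editor's note] 'Setting preset to enabled' above means Ignition does not symlink units directly; it records the enablement in a preset file and lets systemd's preset machinery create the symlinks on first boot. A sketch of the resulting drop-in (the exact file name Ignition uses is an assumption here):

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service

    # applied by: systemctl preset prepare-helm.service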
Nov 1 00:22:10.523613 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:22:10.525808 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:22:10.528035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:22:10.530394 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:22:10.532407 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:22:10.534139 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:22:10.536120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:22:10.536216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:22:10.538650 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:22:10.539873 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:22:10.542022 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:22:10.543081 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:22:10.545555 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:22:10.545659 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:22:10.548582 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:22:10.548690 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:22:10.550042 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:22:10.550138 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:22:10.559819 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:22:10.564705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:22:10.565569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:22:10.565741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:22:10.568546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:22:10.568647 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:22:10.577829 ignition[1000]: INFO : Ignition 2.19.0 Nov 1 00:22:10.577829 ignition[1000]: INFO : Stage: umount Nov 1 00:22:10.577829 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:10.577829 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:22:10.593824 ignition[1000]: INFO : umount: umount passed Nov 1 00:22:10.593824 ignition[1000]: INFO : Ignition finished successfully Nov 1 00:22:10.584859 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:22:10.586491 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:22:10.590420 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:22:10.590560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:22:10.596540 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:22:10.596595 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:22:10.598928 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:22:10.598979 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:22:10.599860 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:22:10.599911 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 1 00:22:10.602578 systemd[1]: Stopped target network.target - Network. Nov 1 00:22:10.603733 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:22:10.603788 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:22:10.606796 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:22:10.608205 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:22:10.612606 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:22:10.613555 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:22:10.615565 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:22:10.639033 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:22:10.639096 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:22:10.640936 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:22:10.640982 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:22:10.643053 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:22:10.643103 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:22:10.644752 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:22:10.644802 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:22:10.646862 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:22:10.649020 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:22:10.651472 systemd-networkd[773]: eth0: DHCPv6 lease lost Nov 1 00:22:10.653726 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:22:10.654325 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:22:10.655542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:22:10.659796 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:22:10.659936 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:22:10.661480 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:22:10.661602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:22:10.664872 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:22:10.664932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:22:10.666224 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:22:10.666276 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:22:10.675659 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:22:10.677068 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:22:10.677124 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:22:10.678099 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:22:10.678152 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:22:10.680437 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:22:10.680487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:22:10.682238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:22:10.682287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 1 00:22:10.684181 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:22:10.700047 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:22:10.700177 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:22:10.702144 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:22:10.702311 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:22:10.704762 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:22:10.704831 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:22:10.706300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:22:10.706343 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:22:10.708107 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:22:10.708159 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:22:10.710545 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:22:10.710595 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:22:10.712589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:22:10.712638 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:22:10.722489 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:22:10.724006 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:22:10.724062 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:22:10.726898 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:22:10.726952 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:22:10.730013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:22:10.730065 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:22:10.731752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:22:10.731804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:10.734197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:22:10.734297 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:22:10.736214 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:22:10.743515 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:22:10.753969 systemd[1]: Switching root. Nov 1 00:22:10.787971 systemd-journald[178]: Journal stopped Nov 1 00:22:12.010218 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Nov 1 00:22:12.010252 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:22:12.010265 kernel: SELinux: policy capability open_perms=1 Nov 1 00:22:12.010275 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:22:12.010287 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:22:12.010296 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:22:12.010306 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:22:12.010315 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:22:12.010324 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:22:12.010333 kernel: audit: type=1403 audit(1761956530.934:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:22:12.010640 systemd[1]: Successfully loaded SELinux policy in 59.400ms. Nov 1 00:22:12.010667 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.662ms. Nov 1 00:22:12.010679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:22:12.010690 systemd[1]: Detected virtualization kvm. Nov 1 00:22:12.010700 systemd[1]: Detected architecture x86-64. Nov 1 00:22:12.010710 systemd[1]: Detected first boot. Nov 1 00:22:12.010723 systemd[1]: Initializing machine ID from random generator. Nov 1 00:22:12.010733 zram_generator::config[1042]: No configuration found. Nov 1 00:22:12.010743 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:22:12.010753 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:22:12.010763 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:22:12.010773 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:22:12.010783 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:22:12.010796 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:22:12.010806 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:22:12.010816 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:22:12.010826 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:22:12.010836 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:22:12.010846 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:22:12.010856 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:22:12.010868 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:22:12.010879 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:22:12.010890 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:22:12.010900 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:22:12.010910 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:22:12.010919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 1 00:22:12.010929 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:22:12.010939 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:22:12.010951 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:22:12.010961 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:22:12.010974 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:22:12.010985 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:22:12.010995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:22:12.011005 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:22:12.011015 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:22:12.011025 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:22:12.011038 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:22:12.011048 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:22:12.011058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:22:12.011068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:22:12.011078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:22:12.011092 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:22:12.011102 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:22:12.011113 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:22:12.011123 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:22:12.011133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:12.011143 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:22:12.011153 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:22:12.011163 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:22:12.011176 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:22:12.011186 systemd[1]: Reached target machines.target - Containers. Nov 1 00:22:12.011196 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:22:12.011206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:22:12.011217 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:22:12.011227 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:22:12.011237 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:22:12.011247 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:22:12.011260 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:22:12.011270 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:22:12.011280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 1 00:22:12.011291 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:22:12.011301 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:22:12.011311 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:22:12.011321 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:22:12.011332 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:22:12.011344 kernel: fuse: init (API version 7.39) Nov 1 00:22:12.011403 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:22:12.011417 kernel: ACPI: bus type drm_connector registered Nov 1 00:22:12.011427 kernel: loop: module loaded Nov 1 00:22:12.011437 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:22:12.011448 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:22:12.011458 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:22:12.011491 systemd-journald[1129]: Collecting audit messages is disabled. Nov 1 00:22:12.011517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:22:12.011528 systemd-journald[1129]: Journal started Nov 1 00:22:12.011547 systemd-journald[1129]: Runtime Journal (/run/log/journal/ab319cbf151449a9a7ee21683f996d0f) is 8.0M, max 78.3M, 70.3M free. Nov 1 00:22:11.597278 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:22:11.614159 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 1 00:22:11.615088 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:22:12.018769 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:22:12.018798 systemd[1]: Stopped verity-setup.service. Nov 1 00:22:12.026389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:12.031401 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:22:12.032182 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:22:12.033342 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:22:12.034515 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:22:12.035612 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:22:12.036698 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:22:12.037801 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:22:12.039076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:22:12.040598 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:22:12.042127 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:22:12.042409 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:22:12.044332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:12.044566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:22:12.046077 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:22:12.046392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
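[Editor's note] The 'Runtime Journal ... is 8.0M, max 78.3M, 70.3M free' line is journald sizing its volatile /run journal against the backing tmpfs; both the volatile journal and the persistent one in /var (filled by the journal flush below) honor explicit caps. A sketch of the relevant knobs, values illustrative:

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=64M    # cap for /run/log/journal (volatile)
    SystemMaxUse=195M    # cap for /var/log/journal (persistent)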
Nov 1 00:22:12.048011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:12.048260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:22:12.049895 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:22:12.050151 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:22:12.051548 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:12.051811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:22:12.053249 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:22:12.054971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:22:12.056934 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:22:12.100746 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:22:12.109956 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:22:12.117573 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:22:12.118870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:22:12.118961 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:22:12.120945 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:22:12.125664 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:22:12.135571 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:22:12.136609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:22:12.141648 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:22:12.148723 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:22:12.149756 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:12.157042 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:22:12.159038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:22:12.175432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:22:12.184621 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:22:12.188905 systemd-journald[1129]: Time spent on flushing to /var/log/journal/ab319cbf151449a9a7ee21683f996d0f is 116.093ms for 974 entries. Nov 1 00:22:12.188905 systemd-journald[1129]: System Journal (/var/log/journal/ab319cbf151449a9a7ee21683f996d0f) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:22:12.346730 systemd-journald[1129]: Received client request to flush runtime journal. Nov 1 00:22:12.346783 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:22:12.346813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:22:12.195521 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:22:12.199758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 1 00:22:12.204721 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:22:12.207002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:22:12.208294 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:22:12.230905 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:22:12.244740 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:22:12.245906 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:22:12.257495 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:22:12.274890 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:22:12.294918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:22:12.344592 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:22:12.347463 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:22:12.350649 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:22:12.351889 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Nov 1 00:22:12.351903 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Nov 1 00:22:12.367694 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:22:12.379818 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:22:12.386438 kernel: loop1: detected capacity change from 0 to 140768 Nov 1 00:22:12.437697 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:22:12.449490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:22:12.455407 kernel: loop2: detected capacity change from 0 to 142488 Nov 1 00:22:12.491167 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 1 00:22:12.493436 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 1 00:22:12.516388 kernel: loop3: detected capacity change from 0 to 8 Nov 1 00:22:12.517641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:22:12.545399 kernel: loop4: detected capacity change from 0 to 224512 Nov 1 00:22:12.576557 kernel: loop5: detected capacity change from 0 to 140768 Nov 1 00:22:12.603329 kernel: loop6: detected capacity change from 0 to 142488 Nov 1 00:22:12.630384 kernel: loop7: detected capacity change from 0 to 8 Nov 1 00:22:12.633120 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Nov 1 00:22:12.633835 (sd-merge)[1190]: Merged extensions into '/usr'. Nov 1 00:22:12.639770 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:22:12.639874 systemd[1]: Reloading... Nov 1 00:22:12.777435 zram_generator::config[1216]: No configuration found. Nov 1 00:22:12.815808 ldconfig[1157]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:22:12.886534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
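[Editor's note] The (sd-merge) lines above are systemd-sysext at work: each .raw image under /etc/extensions or /var/lib/extensions contributes a /usr subtree, and all of them are overlay-mounted together onto /usr. This is how Flatcar ships containerd, Docker, and the kubernetes-v1.32.4 sysext that Ignition linked into /etc/extensions earlier. An image is only accepted if it carries a matching extension-release file; an illustrative layout (field values are assumptions):

    # inside kubernetes.raw
    usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar
        SYSEXT_LEVEL=1.0

    # inspect or re-merge at runtime
    systemd-sysext status
    systemd-sysext refresh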
Nov 1 00:22:12.932677 systemd[1]: Reloading finished in 292 ms. Nov 1 00:22:12.965661 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:22:12.967508 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:22:12.969078 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:22:12.980576 systemd[1]: Starting ensure-sysext.service... Nov 1 00:22:12.984688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:22:12.989637 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:22:13.001499 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:22:13.001524 systemd[1]: Reloading... Nov 1 00:22:13.015818 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:22:13.016179 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:22:13.021220 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:22:13.022280 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Nov 1 00:22:13.022432 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Nov 1 00:22:13.028145 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:22:13.028168 systemd-tmpfiles[1261]: Skipping /boot Nov 1 00:22:13.045606 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Nov 1 00:22:13.051959 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:22:13.051974 systemd-tmpfiles[1261]: Skipping /boot Nov 1 00:22:13.128576 zram_generator::config[1298]: No configuration found. Nov 1 00:22:13.289090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:13.305396 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1300) Nov 1 00:22:13.349402 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 00:22:13.365379 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:22:13.365711 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:22:13.373375 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:22:13.381430 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:22:13.381759 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:22:13.391418 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 00:22:13.396899 systemd[1]: Reloading finished in 394 ms. Nov 1 00:22:13.418070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:22:13.420820 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:22:13.451480 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:22:13.475451 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:22:13.484846 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
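[Editor's note] The 'Duplicate line for path' messages below are tmpfiles.d hygiene warnings, not failures: two packaged snippets declare the same path, and systemd-tmpfiles keeps the first declaration while ignoring the rest. Each line follows a fixed column format; a sketch with a hypothetical entry:

    # tmpfiles.d format: Type Path Mode User Group Age Argument
    d /run/example 0755 root root - -    # create a directory at boot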
Nov 1 00:22:13.493139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:13.496851 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:22:13.498621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:22:13.503767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:22:13.506669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:22:13.510591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:22:13.516894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:22:13.524799 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:22:13.531695 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:22:13.540304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:22:13.547531 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:22:13.556636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:22:13.558905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:13.563890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:13.564423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:22:13.567014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:13.567202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:22:13.570633 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:13.570807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:22:13.591612 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 1 00:22:13.594063 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:22:13.603565 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:13.604017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:22:13.611618 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:22:13.621052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:22:13.626170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:22:13.631090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:22:13.636419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:22:13.640548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:22:13.647692 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:22:13.651523 augenrules[1398]: No rules Nov 1 00:22:13.652950 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
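modprobe@dm_mod.service, modprobe@efi_pstore.service and modprobe@loop.service above are instances of the modprobe@.service template: each start job runs modprobe on the instance name (the %i specifier) and exits, which is why every instance reports 'Deactivated successfully' moments after it finishes. A sketch:

    $ systemctl cat modprobe@.service       # shows the ExecStart line with the %i specifier
    $ systemctl start modprobe@loop.service
    $ lsmod | grep -E '^(loop|dm_mod)'      # confirm the modules actually loaded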
Nov 1 00:22:13.654081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:13.657730 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:13.659012 lvm[1394]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:22:13.660104 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:22:13.662244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:13.663642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:22:13.665111 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:22:13.665291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:22:13.673157 systemd[1]: Finished ensure-sysext.service. Nov 1 00:22:13.674920 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:22:13.688774 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:13.688985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:22:13.690776 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:22:13.697558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:22:13.706492 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:22:13.707978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:13.708511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:22:13.710061 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:22:13.715999 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:13.722002 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:22:13.729670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:22:13.851598 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:22:13.852919 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:22:13.856441 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:22:13.857761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:13.859883 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:22:13.869941 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:22:13.885584 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:22:13.904665 systemd-resolved[1378]: Positive Trust Anchors: Nov 1 00:22:13.905001 systemd-resolved[1378]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:22:13.905082 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:22:13.909343 systemd-resolved[1378]: Defaulting to hostname 'linux'. Nov 1 00:22:13.911123 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:22:13.912188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:22:13.915117 systemd-networkd[1377]: lo: Link UP Nov 1 00:22:13.915137 systemd-networkd[1377]: lo: Gained carrier Nov 1 00:22:13.919221 systemd-networkd[1377]: Enumeration completed Nov 1 00:22:13.919541 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:22:13.921037 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:22:13.921055 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:22:13.922003 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:22:13.924260 systemd[1]: Reached target network.target - Network. Nov 1 00:22:13.927020 systemd-networkd[1377]: eth0: Link UP Nov 1 00:22:13.927038 systemd-networkd[1377]: eth0: Gained carrier Nov 1 00:22:13.927053 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:22:13.932556 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:22:13.934155 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:22:13.935250 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:22:13.936304 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:22:13.937554 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:22:13.938555 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:22:13.939533 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:22:13.939577 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:22:13.940408 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:22:13.941718 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:22:13.942739 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:22:13.943727 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:22:13.946507 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:22:13.949389 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:22:13.961688 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
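The 'found matching network' lines mean no more specific .network file claimed eth0, so Flatcar's catch-all /usr/lib/systemd/network/zz-default.network applied DHCP; the 'potentially unpredictable interface name' note is networkd pointing out that the match is on a kernel-assigned name rather than a stable identifier. A hedged sketch for inspecting the binding (the file body shown is the typical catch-all shape, not quoted from this host):

    $ networkctl status eth0        # which .network file is bound, plus the DHCP lease
    $ cat /usr/lib/systemd/network/zz-default.network
    [Match]
    Name=*
    [Network]
    DHCP=yes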
Nov 1 00:22:13.963153 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:22:13.964184 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:22:13.965048 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:22:13.966111 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:22:13.966167 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:22:13.971462 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:22:13.974553 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:22:13.982703 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:22:13.985471 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:22:13.997492 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:22:13.998417 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:22:14.002504 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:22:14.009750 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:22:14.017279 jq[1441]: false Nov 1 00:22:14.018599 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:22:14.022827 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:22:14.036896 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:22:14.039934 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:22:14.042597 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:22:14.048904 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:22:14.052518 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:22:14.057933 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:22:14.058181 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:22:14.058627 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:22:14.058836 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:22:14.062206 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:22:14.062684 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
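A number of units above were 'skipped because of an unmet condition check' or 'because no trigger condition checks were met' (tcsd wants /dev/tpm0, the Xen helpers want ConditionVirtualization=xen, update-ca-certificates wants a real file instead of a symlink). Condition failures are not errors: the unit loads, but its start job becomes a no-op. A sketch for checking why a given unit was skipped:

    $ systemctl cat tcsd.service | grep -i '^Condition'
    $ systemctl show tcsd.service -p ConditionResult -p ConditionTimestamp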
Nov 1 00:22:14.072806 extend-filesystems[1442]: Found loop4 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found loop5 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found loop6 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found loop7 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda1 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda2 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda3 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found usr Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda4 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda6 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda7 Nov 1 00:22:14.072806 extend-filesystems[1442]: Found sda9 Nov 1 00:22:14.072806 extend-filesystems[1442]: Checking size of /dev/sda9 Nov 1 00:22:14.071935 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:22:14.071747 dbus-daemon[1440]: [system] SELinux support is enabled Nov 1 00:22:14.129912 update_engine[1457]: I20251101 00:22:14.103998 1457 main.cc:92] Flatcar Update Engine starting Nov 1 00:22:14.129912 update_engine[1457]: I20251101 00:22:14.119506 1457 update_check_scheduler.cc:74] Next update check in 4m20s Nov 1 00:22:14.093291 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:22:14.130185 jq[1458]: true Nov 1 00:22:14.093960 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:22:14.096152 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:22:14.096177 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:22:14.119752 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:22:14.131528 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:22:14.145569 jq[1471]: true Nov 1 00:22:14.159990 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:22:14.169099 extend-filesystems[1442]: Resized partition /dev/sda9 Nov 1 00:22:14.173466 coreos-metadata[1439]: Nov 01 00:22:14.172 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 1 00:22:14.177933 tar[1466]: linux-amd64/LICENSE Nov 1 00:22:14.177933 tar[1466]: linux-amd64/helm Nov 1 00:22:14.180838 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:22:14.186183 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Nov 1 00:22:14.190933 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:22:14.190978 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:22:14.195695 systemd-logind[1450]: New seat seat0. Nov 1 00:22:14.197243 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:22:14.272457 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1293) Nov 1 00:22:14.292827 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:22:14.304418 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
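The resize kicked off above grows the root ext4 from 553472 to 20360187 4 KiB blocks: 553472 × 4096 ≈ 2.3 GB (the shipped image size) out to 20360187 × 4096 ≈ 83.4 GB, the rest of the provisioned disk. ext4 can do this online, which is what extend-filesystems relies on; a manual equivalent on the device named in the log would be roughly:

    $ lsblk -o NAME,SIZE,MOUNTPOINT /dev/sda
    $ resize2fs /dev/sda9    # grows the mounted ext4 in place, no unmount needed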
Nov 1 00:22:14.329778 systemd[1]: Starting sshkeys.service... Nov 1 00:22:14.366216 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:22:14.376642 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 00:22:14.391466 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:22:14.440849 coreos-metadata[1506]: Nov 01 00:22:14.438 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 1 00:22:14.450498 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:22:14.521162 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:22:14.532520 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:22:14.544312 containerd[1475]: time="2025-11-01T00:22:14.544228930Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:22:14.568414 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Nov 1 00:22:14.571409 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:22:14.571656 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:22:14.573451 containerd[1475]: time="2025-11-01T00:22:14.573231550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:14.577288 containerd[1475]: time="2025-11-01T00:22:14.577213010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:14.583444 containerd[1475]: time="2025-11-01T00:22:14.577310200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:22:14.583444 containerd[1475]: time="2025-11-01T00:22:14.577458070Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:22:14.582281 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:22:14.584723 containerd[1475]: time="2025-11-01T00:22:14.584503490Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:22:14.584723 containerd[1475]: time="2025-11-01T00:22:14.584537330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:14.584723 containerd[1475]: time="2025-11-01T00:22:14.584611260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:14.584723 containerd[1475]: time="2025-11-01T00:22:14.584626990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:14.585501 containerd[1475]: time="2025-11-01T00:22:14.585212370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:14.585700 containerd[1475]: time="2025-11-01T00:22:14.585617840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:22:14.586592 containerd[1475]: time="2025-11-01T00:22:14.585790550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:14.586592 containerd[1475]: time="2025-11-01T00:22:14.586428930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:14.586655 extend-filesystems[1482]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 1 00:22:14.586655 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 1 00:22:14.586655 extend-filesystems[1482]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Nov 1 00:22:14.640808 extend-filesystems[1442]: Resized filesystem in /dev/sda9 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.588044400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.590139220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.590560710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.590578950Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.590682230Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.590737630Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.632419600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.632850620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.632878970Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.632897430Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.632911460Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.633099050Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:22:14.643239 containerd[1475]: time="2025-11-01T00:22:14.633336970Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:22:14.591187 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633503580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633522040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633534100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633546780Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633558780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633570180Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633583690Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633597710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633610040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633621620Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633633070Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633653980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633666240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.644327 containerd[1475]: time="2025-11-01T00:22:14.633677270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.592868 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633689560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633700930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633716950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633727490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633806400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633820530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633839680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633851320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633862200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633872620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633885810Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633903860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633913980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648260 containerd[1475]: time="2025-11-01T00:22:14.633925160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:22:14.626282 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.633965200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.633982120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.633992510Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.634003010Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.634012500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.634023720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.634038190Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:22:14.648631 containerd[1475]: time="2025-11-01T00:22:14.634053020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:22:14.644851 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.634295830Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.634351770Z" level=info msg="Connect containerd service" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.635649340Z" level=info msg="using legacy CRI server" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.635663700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.635785270Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.638875470Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:22:14.648834 containerd[1475]: 
time="2025-11-01T00:22:14.639055530Z" level=info msg="Start subscribing containerd event" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.639125620Z" level=info msg="Start recovering state" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.639194690Z" level=info msg="Start event monitor" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.639205730Z" level=info msg="Start snapshots syncer" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.639216280Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.639231420Z" level=info msg="Start streaming server" Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.640068830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.640225110Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:22:14.648834 containerd[1475]: time="2025-11-01T00:22:14.641743640Z" level=info msg="containerd successfully booted in 0.099640s" Nov 1 00:22:14.654677 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:22:14.655799 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:22:14.657880 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:22:14.707439 systemd-networkd[1377]: eth0: DHCPv4 address 172.237.159.149/24, gateway 172.237.159.1 acquired from 23.205.167.117 Nov 1 00:22:14.709507 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. Nov 1 00:22:14.711582 dbus-daemon[1440]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1377 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:22:14.725510 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 1 00:22:14.797190 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:22:14.797541 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 1 00:22:14.799227 dbus-daemon[1440]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1537 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:22:14.807725 systemd[1]: Starting polkit.service - Authorization Manager... Nov 1 00:22:14.822485 polkitd[1538]: Started polkitd version 121 Nov 1 00:22:14.825859 polkitd[1538]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:22:14.825916 polkitd[1538]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:22:14.826594 polkitd[1538]: Finished loading, compiling and executing 2 rules Nov 1 00:22:14.827508 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:22:14.827654 systemd[1]: Started polkit.service - Authorization Manager. Nov 1 00:22:14.829229 polkitd[1538]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:22:14.843181 systemd-hostnamed[1537]: Hostname set to <172-237-159-149> (transient) Nov 1 00:22:14.843663 systemd-resolved[1378]: System hostname changed to '172-237-159-149'. Nov 1 00:22:16.420337 systemd-timesyncd[1416]: Contacted time server 50.18.44.198:123 (0.flatcar.pool.ntp.org). 
Nov 1 00:22:16.420396 systemd-timesyncd[1416]: Initial clock synchronization to Sat 2025-11-01 00:22:16.419552 UTC. Nov 1 00:22:16.420869 systemd-resolved[1378]: Clock change detected. Flushing caches. Nov 1 00:22:16.520637 tar[1466]: linux-amd64/README.md Nov 1 00:22:16.533702 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:22:16.721595 coreos-metadata[1439]: Nov 01 00:22:16.721 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 1 00:22:16.815220 coreos-metadata[1439]: Nov 01 00:22:16.814 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 1 00:22:16.989243 coreos-metadata[1506]: Nov 01 00:22:16.989 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 1 00:22:16.997142 coreos-metadata[1439]: Nov 01 00:22:16.997 INFO Fetch successful Nov 1 00:22:16.997142 coreos-metadata[1439]: Nov 01 00:22:16.997 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 1 00:22:17.080957 coreos-metadata[1506]: Nov 01 00:22:17.080 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 1 00:22:17.218153 coreos-metadata[1506]: Nov 01 00:22:17.218 INFO Fetch successful Nov 1 00:22:17.239536 update-ssh-keys[1555]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:22:17.241079 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:22:17.243910 systemd[1]: Finished sshkeys.service. Nov 1 00:22:17.251777 systemd-networkd[1377]: eth0: Gained IPv6LL Nov 1 00:22:17.256045 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:22:17.257761 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:22:17.260768 coreos-metadata[1439]: Nov 01 00:22:17.260 INFO Fetch successful Nov 1 00:22:17.265938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:17.270703 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:22:17.311389 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:22:17.369161 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:22:17.370719 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:22:18.231693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:18.233287 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:22:18.235620 systemd[1]: Startup finished in 1.056s (kernel) + 8.197s (initrd) + 5.821s (userspace) = 15.076s. Nov 1 00:22:18.237934 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:18.775038 kubelet[1593]: E1101 00:22:18.774692 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:18.780056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:18.780274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:19.229458 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
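The coreos-metadata retries above resolve here: 'Putting .../v1/token' mints a short-lived token, and the /v1/instance, /v1/network and /v1/ssh-keys fetches then present it, the same PUT-then-GET pattern as EC2's IMDSv2. A hedged curl sketch; the header names follow Linode's metadata service documentation and are an assumption, since the log does not show them:

    $ TOKEN=$(curl -s -X PUT -H 'Metadata-Token-Expiry-Seconds: 3600' \
        http://169.254.169.254/v1/token)
    $ curl -s -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/instance
    $ curl -s -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/ssh-keys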
Nov 1 00:22:19.235787 systemd[1]: Started sshd@0-172.237.159.149:22-139.178.68.195:39540.service - OpenSSH per-connection server daemon (139.178.68.195:39540). Nov 1 00:22:19.561559 sshd[1606]: Accepted publickey for core from 139.178.68.195 port 39540 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:22:19.563384 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:19.573325 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:22:19.582842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:22:19.585219 systemd-logind[1450]: New session 1 of user core. Nov 1 00:22:19.603230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:22:19.609733 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:22:19.621322 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:19.723633 systemd[1610]: Queued start job for default target default.target. Nov 1 00:22:19.735804 systemd[1610]: Created slice app.slice - User Application Slice. Nov 1 00:22:19.735840 systemd[1610]: Reached target paths.target - Paths. Nov 1 00:22:19.735855 systemd[1610]: Reached target timers.target - Timers. Nov 1 00:22:19.737523 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:22:19.750561 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:22:19.750687 systemd[1610]: Reached target sockets.target - Sockets. Nov 1 00:22:19.750721 systemd[1610]: Reached target basic.target - Basic System. Nov 1 00:22:19.750770 systemd[1610]: Reached target default.target - Main User Target. Nov 1 00:22:19.750809 systemd[1610]: Startup finished in 121ms. Nov 1 00:22:19.750943 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:22:19.753249 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:22:20.015760 systemd[1]: Started sshd@1-172.237.159.149:22-139.178.68.195:39552.service - OpenSSH per-connection server daemon (139.178.68.195:39552). Nov 1 00:22:20.337863 sshd[1621]: Accepted publickey for core from 139.178.68.195 port 39552 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:22:20.339328 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:20.346780 systemd-logind[1450]: New session 2 of user core. Nov 1 00:22:20.357632 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:22:20.586313 sshd[1621]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:20.591388 systemd[1]: sshd@1-172.237.159.149:22-139.178.68.195:39552.service: Deactivated successfully. Nov 1 00:22:20.593719 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:22:20.595386 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:22:20.596941 systemd-logind[1450]: Removed session 2. Nov 1 00:22:20.649776 systemd[1]: Started sshd@2-172.237.159.149:22-139.178.68.195:39554.service - OpenSSH per-connection server daemon (139.178.68.195:39554). Nov 1 00:22:20.970410 sshd[1628]: Accepted publickey for core from 139.178.68.195 port 39554 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:22:20.972338 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:20.978438 systemd-logind[1450]: New session 3 of user core. 
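The login above shows the standard per-user systemd bring-up: user-runtime-dir@500 creates /run/user/500, user@500.service starts a dedicated user manager (logged as systemd[1610]), and the SSH connection becomes session-1.scope inside it. A sketch for inspecting that state:

    $ loginctl list-sessions
    $ loginctl session-status 1           # scope, TTY and leader process of the session
    $ systemctl status user@500.service   # the per-user manager for uid 500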
Nov 1 00:22:20.984849 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:22:21.217635 sshd[1628]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:21.223106 systemd[1]: sshd@2-172.237.159.149:22-139.178.68.195:39554.service: Deactivated successfully. Nov 1 00:22:21.225473 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:22:21.226214 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:22:21.227791 systemd-logind[1450]: Removed session 3. Nov 1 00:22:21.276655 systemd[1]: Started sshd@3-172.237.159.149:22-139.178.68.195:39556.service - OpenSSH per-connection server daemon (139.178.68.195:39556). Nov 1 00:22:21.614106 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 39556 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:22:21.615879 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:21.621141 systemd-logind[1450]: New session 4 of user core. Nov 1 00:22:21.627810 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:22:21.866867 sshd[1635]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:21.871263 systemd[1]: sshd@3-172.237.159.149:22-139.178.68.195:39556.service: Deactivated successfully. Nov 1 00:22:21.873084 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:22:21.873999 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:22:21.875061 systemd-logind[1450]: Removed session 4. Nov 1 00:22:21.926833 systemd[1]: Started sshd@4-172.237.159.149:22-139.178.68.195:39560.service - OpenSSH per-connection server daemon (139.178.68.195:39560). Nov 1 00:22:22.262153 sshd[1642]: Accepted publickey for core from 139.178.68.195 port 39560 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:22:22.264099 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:22.269046 systemd-logind[1450]: New session 5 of user core. Nov 1 00:22:22.272622 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:22:22.470221 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:22:22.470615 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:22.490652 sudo[1645]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:22.542814 sshd[1642]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:22.547635 systemd[1]: sshd@4-172.237.159.149:22-139.178.68.195:39560.service: Deactivated successfully. Nov 1 00:22:22.550453 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:22:22.552361 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:22:22.554140 systemd-logind[1450]: Removed session 5. Nov 1 00:22:22.609611 systemd[1]: Started sshd@5-172.237.159.149:22-139.178.68.195:39566.service - OpenSSH per-connection server daemon (139.178.68.195:39566). Nov 1 00:22:22.933797 sshd[1650]: Accepted publickey for core from 139.178.68.195 port 39566 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:22:22.935335 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:22.939995 systemd-logind[1450]: New session 6 of user core. Nov 1 00:22:22.959617 systemd[1]: Started session-6.scope - Session 6 of User core. 
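The sudo entry above runs 'setenforce 1', flipping SELinux from permissive to enforcing at runtime. Assuming the SELinux userland that the log's own setenforce call implies, the current mode can be confirmed with:

    $ getenforce                      # Enforcing / Permissive / Disabled
    $ cat /sys/fs/selinux/enforce     # the same answer (1 or 0) straight from the kernel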
Nov 1 00:22:23.129743 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:22:23.130108 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:23.134319 sudo[1654]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:23.140543 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:22:23.140889 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:23.159789 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:23.161352 auditctl[1657]: No rules Nov 1 00:22:23.161865 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:22:23.162155 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:23.173237 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:23.205528 augenrules[1675]: No rules Nov 1 00:22:23.207295 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:23.209091 sudo[1653]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:23.259526 sshd[1650]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:23.262699 systemd[1]: sshd@5-172.237.159.149:22-139.178.68.195:39566.service: Deactivated successfully. Nov 1 00:22:23.264965 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:22:23.266260 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:22:23.267823 systemd-logind[1450]: Removed session 6. Nov 1 00:22:23.320317 systemd[1]: Started sshd@6-172.237.159.149:22-139.178.68.195:38978.service - OpenSSH per-connection server daemon (139.178.68.195:38978). Nov 1 00:22:23.650736 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 38978 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:22:23.652876 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:23.658603 systemd-logind[1450]: New session 7 of user core. Nov 1 00:22:23.665687 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:22:23.851627 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:22:23.852032 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:24.126715 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:22:24.128975 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:22:24.387429 dockerd[1702]: time="2025-11-01T00:22:24.387283525Z" level=info msg="Starting up" Nov 1 00:22:24.492273 dockerd[1702]: time="2025-11-01T00:22:24.492238625Z" level=info msg="Loading containers: start." Nov 1 00:22:24.602518 kernel: Initializing XFRM netlink socket Nov 1 00:22:24.694136 systemd-networkd[1377]: docker0: Link UP Nov 1 00:22:24.711995 dockerd[1702]: time="2025-11-01T00:22:24.711954225Z" level=info msg="Loading containers: done." 
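The audit-rules sequence earlier in this step is the stock way to change rules at runtime: delete the rule fragments, restart audit-rules.service, and let augenrules recompile whatever remains under /etc/audit/rules.d/*.rules (here nothing, hence 'No rules' from both auditctl and augenrules). The same steps by hand:

    $ auditctl -l          # list loaded rules; prints 'No rules' when empty
    $ augenrules --load    # merge /etc/audit/rules.d/*.rules and load the result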
Nov 1 00:22:24.725348 dockerd[1702]: time="2025-11-01T00:22:24.724954755Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:22:24.725348 dockerd[1702]: time="2025-11-01T00:22:24.725019475Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:22:24.725348 dockerd[1702]: time="2025-11-01T00:22:24.725118765Z" level=info msg="Daemon has completed initialization" Nov 1 00:22:24.757523 dockerd[1702]: time="2025-11-01T00:22:24.757177475Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:22:24.757423 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:22:25.955776 containerd[1475]: time="2025-11-01T00:22:25.955472985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:22:27.072795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550357664.mount: Deactivated successfully. Nov 1 00:22:28.268312 containerd[1475]: time="2025-11-01T00:22:28.268247165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:28.269724 containerd[1475]: time="2025-11-01T00:22:28.269667415Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 00:22:28.270463 containerd[1475]: time="2025-11-01T00:22:28.270356735Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:28.273245 containerd[1475]: time="2025-11-01T00:22:28.273201365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:28.274714 containerd[1475]: time="2025-11-01T00:22:28.274522815Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.31896399s" Nov 1 00:22:28.274714 containerd[1475]: time="2025-11-01T00:22:28.274566315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:22:28.275289 containerd[1475]: time="2025-11-01T00:22:28.275257565Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:22:28.946705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:22:28.951667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:29.106377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
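The 'Scheduled restart job, restart counter is at 1' line is systemd's Restart= policy reacting to the earlier kubelet exit: the unit keeps retrying until /var/lib/kubelet/config.yaml exists, which kubeadm init/join normally writes. A hedged sketch of a minimal file of that shape, for orientation only (the field values are illustrative defaults, not recovered from this host):

    $ systemctl show kubelet -p Restart -p NRestarts -p RestartUSec
    $ cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # must match containerd's SystemdCgroup:true above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF

In a kubeadm flow this file should come from kubeadm itself rather than be written by hand.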
Nov 1 00:22:29.118986 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:29.161728 kubelet[1908]: E1101 00:22:29.161679 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:29.168165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:29.168377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:30.037948 containerd[1475]: time="2025-11-01T00:22:30.037881695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:30.039040 containerd[1475]: time="2025-11-01T00:22:30.038997725Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 00:22:30.039754 containerd[1475]: time="2025-11-01T00:22:30.039690945Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:30.042369 containerd[1475]: time="2025-11-01T00:22:30.042342825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:30.043655 containerd[1475]: time="2025-11-01T00:22:30.043511465Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.76818427s" Nov 1 00:22:30.043655 containerd[1475]: time="2025-11-01T00:22:30.043543895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:22:30.045303 containerd[1475]: time="2025-11-01T00:22:30.045270425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:22:31.645729 containerd[1475]: time="2025-11-01T00:22:31.645606105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.647136 containerd[1475]: time="2025-11-01T00:22:31.647090105Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 00:22:31.648304 containerd[1475]: time="2025-11-01T00:22:31.647860175Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.650926 containerd[1475]: time="2025-11-01T00:22:31.650885995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 
00:22:31.652441 containerd[1475]: time="2025-11-01T00:22:31.652402035Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.60703204s" Nov 1 00:22:31.652703 containerd[1475]: time="2025-11-01T00:22:31.652444355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:22:31.655202 containerd[1475]: time="2025-11-01T00:22:31.655159765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:22:33.188750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406665746.mount: Deactivated successfully. Nov 1 00:22:33.547555 containerd[1475]: time="2025-11-01T00:22:33.547508695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:33.548579 containerd[1475]: time="2025-11-01T00:22:33.548532255Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 00:22:33.548579 containerd[1475]: time="2025-11-01T00:22:33.548559585Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:33.550557 containerd[1475]: time="2025-11-01T00:22:33.550517005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:33.551432 containerd[1475]: time="2025-11-01T00:22:33.551203555Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.89600545s" Nov 1 00:22:33.551432 containerd[1475]: time="2025-11-01T00:22:33.551233175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:22:33.552040 containerd[1475]: time="2025-11-01T00:22:33.552007205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:22:34.404282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820191743.mount: Deactivated successfully. 
Nov 1 00:22:35.127928 containerd[1475]: time="2025-11-01T00:22:35.127854375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:35.128963 containerd[1475]: time="2025-11-01T00:22:35.128912445Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 00:22:35.129901 containerd[1475]: time="2025-11-01T00:22:35.129526555Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:35.132713 containerd[1475]: time="2025-11-01T00:22:35.132691055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:35.133940 containerd[1475]: time="2025-11-01T00:22:35.133908425Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.58186942s" Nov 1 00:22:35.133993 containerd[1475]: time="2025-11-01T00:22:35.133945505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:22:35.135134 containerd[1475]: time="2025-11-01T00:22:35.135101165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:22:35.983934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894663348.mount: Deactivated successfully. 
Nov 1 00:22:35.988649 containerd[1475]: time="2025-11-01T00:22:35.988594815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:35.989688 containerd[1475]: time="2025-11-01T00:22:35.989639395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:22:35.990455 containerd[1475]: time="2025-11-01T00:22:35.990405295Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:35.992468 containerd[1475]: time="2025-11-01T00:22:35.992422665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:35.993648 containerd[1475]: time="2025-11-01T00:22:35.993251185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 858.05729ms" Nov 1 00:22:35.993648 containerd[1475]: time="2025-11-01T00:22:35.993283755Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:22:35.994180 containerd[1475]: time="2025-11-01T00:22:35.994124495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:22:36.904531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054634452.mount: Deactivated successfully. Nov 1 00:22:38.432844 containerd[1475]: time="2025-11-01T00:22:38.432781565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:38.433860 containerd[1475]: time="2025-11-01T00:22:38.433819705Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 00:22:38.435925 containerd[1475]: time="2025-11-01T00:22:38.434831375Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:38.437405 containerd[1475]: time="2025-11-01T00:22:38.437365585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:38.438967 containerd[1475]: time="2025-11-01T00:22:38.438716765Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.44455713s" Nov 1 00:22:38.438967 containerd[1475]: time="2025-11-01T00:22:38.438750315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:22:39.196665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
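The failed kubelet starts here are the normal pre-`kubeadm init` crash loop: the unit exits with status=1/FAILURE because /var/lib/kubelet/config.yaml has not been written yet, and systemd's restart policy keeps rescheduling it (restart counter 1, then 2 above) until kubeadm drops the file in place. A minimal sketch of the same existence check (a hypothetical helper, not kubelet source):

    import sys
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def preflight() -> int:
        # Mirror the failure in the log: exit non-zero while the config is
        # absent, so systemd records status=1/FAILURE and schedules a restart.
        if not KUBELET_CONFIG.is_file():
            print(f"failed to read kubelet config file {KUBELET_CONFIG}", file=sys.stderr)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(preflight())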
Nov 1 00:22:39.205655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:39.371641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:39.380822 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:39.429698 kubelet[2069]: E1101 00:22:39.429654 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:39.433315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:39.433558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:40.731103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:40.738752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:40.772390 systemd[1]: Reloading requested from client PID 2083 ('systemctl') (unit session-7.scope)... Nov 1 00:22:40.772405 systemd[1]: Reloading... Nov 1 00:22:40.910509 zram_generator::config[2126]: No configuration found. Nov 1 00:22:41.037347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:41.114116 systemd[1]: Reloading finished in 341 ms. Nov 1 00:22:41.172210 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:22:41.172329 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:22:41.172858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:41.178742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:41.329922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:41.338982 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:41.386518 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:41.386518 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:41.386518 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:22:41.387078 kubelet[2177]: I1101 00:22:41.386526 2177 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:41.715407 kubelet[2177]: I1101 00:22:41.715351 2177 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:41.715407 kubelet[2177]: I1101 00:22:41.715386 2177 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:41.715671 kubelet[2177]: I1101 00:22:41.715639 2177 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:41.746993 kubelet[2177]: E1101 00:22:41.746929 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.159.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:41.748586 kubelet[2177]: I1101 00:22:41.748229 2177 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:41.758006 kubelet[2177]: E1101 00:22:41.757954 2177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:41.758006 kubelet[2177]: I1101 00:22:41.757990 2177 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:41.762455 kubelet[2177]: I1101 00:22:41.762403 2177 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:22:41.764302 kubelet[2177]: I1101 00:22:41.764250 2177 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:41.764457 kubelet[2177]: I1101 00:22:41.764295 2177 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-159-149","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:41.764579 kubelet[2177]: I1101 00:22:41.764459 2177 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:41.764579 kubelet[2177]: I1101 00:22:41.764472 2177 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:41.764665 kubelet[2177]: I1101 00:22:41.764641 2177 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:41.768413 kubelet[2177]: I1101 00:22:41.768393 2177 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:22:41.768459 kubelet[2177]: I1101 00:22:41.768424 2177 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:41.768459 kubelet[2177]: I1101 00:22:41.768446 2177 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:41.768459 kubelet[2177]: I1101 00:22:41.768456 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:41.773989 kubelet[2177]: W1101 00:22:41.773867 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.159.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:41.773989 kubelet[2177]: E1101 00:22:41.773928 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.159.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:41.774971 kubelet[2177]: W1101 
00:22:41.774917 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.159.149:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-159-149&limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:41.775129 kubelet[2177]: E1101 00:22:41.775059 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.159.149:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-159-149&limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:41.775396 kubelet[2177]: I1101 00:22:41.775320 2177 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:41.776101 kubelet[2177]: I1101 00:22:41.776085 2177 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:41.777212 kubelet[2177]: W1101 00:22:41.777176 2177 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:22:41.780896 kubelet[2177]: I1101 00:22:41.780693 2177 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:41.780896 kubelet[2177]: I1101 00:22:41.780724 2177 server.go:1287] "Started kubelet" Nov 1 00:22:41.788386 kubelet[2177]: I1101 00:22:41.788267 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:41.790131 kubelet[2177]: E1101 00:22:41.788872 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.159.149:6443/api/v1/namespaces/default/events\": dial tcp 172.237.159.149:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-159-149.1873ba285e9b8f03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-159-149,UID:172-237-159-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-159-149,},FirstTimestamp:2025-11-01 00:22:41.780707075 +0000 UTC m=+0.437464321,LastTimestamp:2025-11-01 00:22:41.780707075 +0000 UTC m=+0.437464321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-159-149,}" Nov 1 00:22:41.793300 kubelet[2177]: I1101 00:22:41.793249 2177 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:41.795006 kubelet[2177]: I1101 00:22:41.794991 2177 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:41.797406 kubelet[2177]: I1101 00:22:41.795322 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:41.798826 kubelet[2177]: I1101 00:22:41.798795 2177 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:41.798867 kubelet[2177]: I1101 00:22:41.798062 2177 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:41.799479 kubelet[2177]: I1101 00:22:41.795454 2177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:41.799479 kubelet[2177]: I1101 00:22:41.798071 2177 desired_state_of_world_populator.go:150] "Desired state 
populator starts to run" Nov 1 00:22:41.799479 kubelet[2177]: I1101 00:22:41.799025 2177 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:41.799479 kubelet[2177]: E1101 00:22:41.798159 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-159-149\" not found" Nov 1 00:22:41.799479 kubelet[2177]: W1101 00:22:41.799376 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.159.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:41.799479 kubelet[2177]: E1101 00:22:41.799418 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.159.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:41.799479 kubelet[2177]: E1101 00:22:41.799421 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.159.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-159-149?timeout=10s\": dial tcp 172.237.159.149:6443: connect: connection refused" interval="200ms" Nov 1 00:22:41.800385 kubelet[2177]: I1101 00:22:41.800370 2177 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:41.800540 kubelet[2177]: I1101 00:22:41.800521 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:41.801756 kubelet[2177]: E1101 00:22:41.801740 2177 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:41.802312 kubelet[2177]: I1101 00:22:41.802298 2177 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:41.816344 kubelet[2177]: I1101 00:22:41.816317 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:41.818022 kubelet[2177]: I1101 00:22:41.817979 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:41.818022 kubelet[2177]: I1101 00:22:41.818012 2177 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:41.818099 kubelet[2177]: I1101 00:22:41.818033 2177 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
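Every reflector and the lease controller above fail the same way, with "dial tcp 172.237.159.149:6443: connect: connection refused": the kubelet comes up before the static kube-apiserver pod it is about to launch, so client-go simply retries until the socket opens. A small stdlib sketch of that wait (function name and backoff constants are ours, not client-go's):

    import socket
    import time

    def wait_for_apiserver(host: str, port: int, delay: float = 0.5, cap: float = 8.0) -> None:
        """Retry a TCP dial with doubling backoff until the endpoint accepts."""
        while True:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return  # socket open; list/watch calls can proceed
            except OSError as err:  # e.g. ECONNREFUSED while the static pod starts
                print(f"dial tcp {host}:{port}: {err}; retrying in {delay:.1f}s")
                time.sleep(delay)
                delay = min(delay * 2, cap)

    # wait_for_apiserver("172.237.159.149", 6443)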
Nov 1 00:22:41.818099 kubelet[2177]: I1101 00:22:41.818041 2177 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:41.818144 kubelet[2177]: E1101 00:22:41.818091 2177 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:41.829132 kubelet[2177]: W1101 00:22:41.828989 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.159.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:41.829132 kubelet[2177]: E1101 00:22:41.829031 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.159.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:41.834905 kubelet[2177]: I1101 00:22:41.834689 2177 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:41.834905 kubelet[2177]: I1101 00:22:41.834702 2177 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:41.834905 kubelet[2177]: I1101 00:22:41.834718 2177 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:41.839638 kubelet[2177]: I1101 00:22:41.839620 2177 policy_none.go:49] "None policy: Start" Nov 1 00:22:41.839906 kubelet[2177]: I1101 00:22:41.839704 2177 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:41.839906 kubelet[2177]: I1101 00:22:41.839719 2177 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:41.847968 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:22:41.860046 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:22:41.864428 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:22:41.872302 kubelet[2177]: I1101 00:22:41.872267 2177 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:41.872459 kubelet[2177]: I1101 00:22:41.872431 2177 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:41.872525 kubelet[2177]: I1101 00:22:41.872452 2177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:41.872969 kubelet[2177]: I1101 00:22:41.872939 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:41.874314 kubelet[2177]: E1101 00:22:41.874281 2177 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:41.874362 kubelet[2177]: E1101 00:22:41.874320 2177 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-159-149\" not found" Nov 1 00:22:41.929723 systemd[1]: Created slice kubepods-burstable-podf16dd616470c49c4779dec1b68bf7bce.slice - libcontainer container kubepods-burstable-podf16dd616470c49c4779dec1b68bf7bce.slice. 
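The eviction-manager lines here belong to the component that will enforce the default hard-eviction thresholds carried in the NodeConfig dump logged at startup above. A short sketch that renders those thresholds the way the eviction manager evaluates signals (the JSON below is excerpted verbatim from that nodeConfig):

    import json

    node_config = json.loads("""{"HardEvictionThresholds":[
     {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
     {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]}""")

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
        print(f"evict when {t['Signal']} < {limit}")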
Nov 1 00:22:41.947237 kubelet[2177]: E1101 00:22:41.947202 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:41.952530 systemd[1]: Created slice kubepods-burstable-pod2e39adfb74a9afcfe7924a44f26ba4be.slice - libcontainer container kubepods-burstable-pod2e39adfb74a9afcfe7924a44f26ba4be.slice. Nov 1 00:22:41.958715 kubelet[2177]: E1101 00:22:41.958697 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:41.962229 systemd[1]: Created slice kubepods-burstable-pod792d6a04aad0afb682ea306a711ba247.slice - libcontainer container kubepods-burstable-pod792d6a04aad0afb682ea306a711ba247.slice. Nov 1 00:22:41.964445 kubelet[2177]: E1101 00:22:41.964419 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:41.974821 kubelet[2177]: I1101 00:22:41.974682 2177 kubelet_node_status.go:75] "Attempting to register node" node="172-237-159-149" Nov 1 00:22:41.976714 kubelet[2177]: E1101 00:22:41.976685 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.159.149:6443/api/v1/nodes\": dial tcp 172.237.159.149:6443: connect: connection refused" node="172-237-159-149" Nov 1 00:22:42.000224 kubelet[2177]: E1101 00:22:42.000166 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.159.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-159-149?timeout=10s\": dial tcp 172.237.159.149:6443: connect: connection refused" interval="400ms" Nov 1 00:22:42.100939 kubelet[2177]: I1101 00:22:42.100792 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f16dd616470c49c4779dec1b68bf7bce-ca-certs\") pod \"kube-apiserver-172-237-159-149\" (UID: \"f16dd616470c49c4779dec1b68bf7bce\") " pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:42.100939 kubelet[2177]: I1101 00:22:42.100826 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-ca-certs\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:42.100939 kubelet[2177]: I1101 00:22:42.100844 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-k8s-certs\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:42.100939 kubelet[2177]: I1101 00:22:42.100859 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:42.100939 kubelet[2177]: I1101 00:22:42.100880 2177 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/792d6a04aad0afb682ea306a711ba247-kubeconfig\") pod \"kube-scheduler-172-237-159-149\" (UID: \"792d6a04aad0afb682ea306a711ba247\") " pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:42.101081 kubelet[2177]: I1101 00:22:42.100894 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f16dd616470c49c4779dec1b68bf7bce-k8s-certs\") pod \"kube-apiserver-172-237-159-149\" (UID: \"f16dd616470c49c4779dec1b68bf7bce\") " pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:42.101081 kubelet[2177]: I1101 00:22:42.100911 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f16dd616470c49c4779dec1b68bf7bce-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-159-149\" (UID: \"f16dd616470c49c4779dec1b68bf7bce\") " pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:42.101081 kubelet[2177]: I1101 00:22:42.100993 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-flexvolume-dir\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:42.101081 kubelet[2177]: I1101 00:22:42.101064 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-kubeconfig\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:42.178992 kubelet[2177]: I1101 00:22:42.178936 2177 kubelet_node_status.go:75] "Attempting to register node" node="172-237-159-149" Nov 1 00:22:42.179420 kubelet[2177]: E1101 00:22:42.179354 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.159.149:6443/api/v1/nodes\": dial tcp 172.237.159.149:6443: connect: connection refused" node="172-237-159-149" Nov 1 00:22:42.248000 kubelet[2177]: E1101 00:22:42.247822 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:42.249151 containerd[1475]: time="2025-11-01T00:22:42.248970715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-159-149,Uid:f16dd616470c49c4779dec1b68bf7bce,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:42.260203 kubelet[2177]: E1101 00:22:42.259967 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:42.261455 containerd[1475]: time="2025-11-01T00:22:42.261375045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-159-149,Uid:2e39adfb74a9afcfe7924a44f26ba4be,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:42.267519 kubelet[2177]: E1101 00:22:42.265366 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:42.268604 containerd[1475]: time="2025-11-01T00:22:42.268557575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-159-149,Uid:792d6a04aad0afb682ea306a711ba247,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:42.401719 kubelet[2177]: E1101 00:22:42.401641 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.159.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-159-149?timeout=10s\": dial tcp 172.237.159.149:6443: connect: connection refused" interval="800ms" Nov 1 00:22:42.581798 kubelet[2177]: I1101 00:22:42.581664 2177 kubelet_node_status.go:75] "Attempting to register node" node="172-237-159-149" Nov 1 00:22:42.582046 kubelet[2177]: E1101 00:22:42.582001 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.159.149:6443/api/v1/nodes\": dial tcp 172.237.159.149:6443: connect: connection refused" node="172-237-159-149" Nov 1 00:22:42.699586 kubelet[2177]: W1101 00:22:42.699423 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.159.149:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-159-149&limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:42.699586 kubelet[2177]: E1101 00:22:42.699577 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.159.149:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-159-149&limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:42.824252 kubelet[2177]: W1101 00:22:42.824177 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.159.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:42.824436 kubelet[2177]: E1101 00:22:42.824270 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.159.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:42.950042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291827609.mount: Deactivated successfully. 
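Note the lease controller's retry interval escalating across attempts: 200ms on the first failure, 400ms next, 800ms above, and 1.6s further down, i.e. a doubling backoff. A tiny sketch of that schedule (the 7s cap is an assumption about client-go's default, not something read from this log):

    def lease_retry_intervals(start_ms: int = 200, cap_ms: int = 7000):
        """Yield the doubling retry intervals seen in the lease-controller log lines."""
        interval = start_ms
        while True:
            yield interval
            interval = min(interval * 2, cap_ms)

    gen = lease_retry_intervals()
    print([next(gen) for _ in range(4)])  # [200, 400, 800, 1600]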
Nov 1 00:22:42.954963 containerd[1475]: time="2025-11-01T00:22:42.954926215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:42.956006 containerd[1475]: time="2025-11-01T00:22:42.955958815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:42.956432 containerd[1475]: time="2025-11-01T00:22:42.956403875Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:42.957086 containerd[1475]: time="2025-11-01T00:22:42.957061015Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:42.958120 containerd[1475]: time="2025-11-01T00:22:42.958038455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:22:42.958742 containerd[1475]: time="2025-11-01T00:22:42.958646875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:42.958742 containerd[1475]: time="2025-11-01T00:22:42.958699725Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:42.961465 containerd[1475]: time="2025-11-01T00:22:42.961434965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:42.964503 containerd[1475]: time="2025-11-01T00:22:42.962969535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.34149ms" Nov 1 00:22:42.964564 containerd[1475]: time="2025-11-01T00:22:42.964473395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.99016ms" Nov 1 00:22:42.965286 containerd[1475]: time="2025-11-01T00:22:42.965266665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.19916ms" Nov 1 00:22:43.053077 containerd[1475]: time="2025-11-01T00:22:43.052325865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:43.053077 containerd[1475]: time="2025-11-01T00:22:43.052368055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:43.053077 containerd[1475]: time="2025-11-01T00:22:43.052386335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:43.053959 containerd[1475]: time="2025-11-01T00:22:43.052475625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:43.057428 containerd[1475]: time="2025-11-01T00:22:43.055778665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:43.057428 containerd[1475]: time="2025-11-01T00:22:43.055830055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:43.057428 containerd[1475]: time="2025-11-01T00:22:43.055843495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:43.057428 containerd[1475]: time="2025-11-01T00:22:43.055922475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:43.059841 containerd[1475]: time="2025-11-01T00:22:43.059309685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:43.059841 containerd[1475]: time="2025-11-01T00:22:43.059347395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:43.059841 containerd[1475]: time="2025-11-01T00:22:43.059360485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:43.059841 containerd[1475]: time="2025-11-01T00:22:43.059424655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:43.084646 systemd[1]: Started cri-containerd-34fc6bafe238a2718b7dd3273b1b0afa4f2cd22748a7c6c8bc0c5dd3ea0590c3.scope - libcontainer container 34fc6bafe238a2718b7dd3273b1b0afa4f2cd22748a7c6c8bc0c5dd3ea0590c3. Nov 1 00:22:43.089208 systemd[1]: Started cri-containerd-3fe6a887c91a0707218efe630c21f9d62918763a646f0593a7300fa947f2029f.scope - libcontainer container 3fe6a887c91a0707218efe630c21f9d62918763a646f0593a7300fa947f2029f. Nov 1 00:22:43.100224 systemd[1]: Started cri-containerd-8ca59e3f29c0e8f12519bc4fca7787658a9dbf9f975da64d46a1472b0856af78.scope - libcontainer container 8ca59e3f29c0e8f12519bc4fca7787658a9dbf9f975da64d46a1472b0856af78. 
Nov 1 00:22:43.142505 kubelet[2177]: W1101 00:22:43.141264 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.159.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:43.143458 kubelet[2177]: E1101 00:22:43.143395 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.159.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:43.162470 containerd[1475]: time="2025-11-01T00:22:43.162323685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-159-149,Uid:2e39adfb74a9afcfe7924a44f26ba4be,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fe6a887c91a0707218efe630c21f9d62918763a646f0593a7300fa947f2029f\"" Nov 1 00:22:43.166165 kubelet[2177]: E1101 00:22:43.166136 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:43.173874 containerd[1475]: time="2025-11-01T00:22:43.173825855Z" level=info msg="CreateContainer within sandbox \"3fe6a887c91a0707218efe630c21f9d62918763a646f0593a7300fa947f2029f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:22:43.182627 containerd[1475]: time="2025-11-01T00:22:43.182597175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-159-149,Uid:792d6a04aad0afb682ea306a711ba247,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ca59e3f29c0e8f12519bc4fca7787658a9dbf9f975da64d46a1472b0856af78\"" Nov 1 00:22:43.183705 kubelet[2177]: E1101 00:22:43.183681 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:43.185652 containerd[1475]: time="2025-11-01T00:22:43.185633885Z" level=info msg="CreateContainer within sandbox \"8ca59e3f29c0e8f12519bc4fca7787658a9dbf9f975da64d46a1472b0856af78\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:22:43.188844 containerd[1475]: time="2025-11-01T00:22:43.188817795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-159-149,Uid:f16dd616470c49c4779dec1b68bf7bce,Namespace:kube-system,Attempt:0,} returns sandbox id \"34fc6bafe238a2718b7dd3273b1b0afa4f2cd22748a7c6c8bc0c5dd3ea0590c3\"" Nov 1 00:22:43.189465 kubelet[2177]: E1101 00:22:43.189448 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:43.191645 containerd[1475]: time="2025-11-01T00:22:43.191437445Z" level=info msg="CreateContainer within sandbox \"34fc6bafe238a2718b7dd3273b1b0afa4f2cd22748a7c6c8bc0c5dd3ea0590c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:22:43.192671 containerd[1475]: time="2025-11-01T00:22:43.192648355Z" level=info msg="CreateContainer within sandbox \"3fe6a887c91a0707218efe630c21f9d62918763a646f0593a7300fa947f2029f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"02eb95adef5ab46cb1336cf8ffda0ca40b603cd8f4a2f2b23e52e2f907dc223f\"" Nov 1 00:22:43.193807 containerd[1475]: time="2025-11-01T00:22:43.193540485Z" level=info msg="StartContainer for \"02eb95adef5ab46cb1336cf8ffda0ca40b603cd8f4a2f2b23e52e2f907dc223f\"" Nov 1 00:22:43.202745 kubelet[2177]: E1101 00:22:43.202470 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.159.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-159-149?timeout=10s\": dial tcp 172.237.159.149:6443: connect: connection refused" interval="1.6s" Nov 1 00:22:43.202787 containerd[1475]: time="2025-11-01T00:22:43.202768615Z" level=info msg="CreateContainer within sandbox \"8ca59e3f29c0e8f12519bc4fca7787658a9dbf9f975da64d46a1472b0856af78\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"213fd19627aa546668a7b07e32b04502d7fb6325e5294462e22cdbcc76ca7b44\"" Nov 1 00:22:43.204290 containerd[1475]: time="2025-11-01T00:22:43.204237065Z" level=info msg="StartContainer for \"213fd19627aa546668a7b07e32b04502d7fb6325e5294462e22cdbcc76ca7b44\"" Nov 1 00:22:43.207989 containerd[1475]: time="2025-11-01T00:22:43.207564805Z" level=info msg="CreateContainer within sandbox \"34fc6bafe238a2718b7dd3273b1b0afa4f2cd22748a7c6c8bc0c5dd3ea0590c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"802ab6c4322fe10845eb90fa574ae85b6d0a2c4c55bb3789158c9c6d670c875b\"" Nov 1 00:22:43.208618 containerd[1475]: time="2025-11-01T00:22:43.208601355Z" level=info msg="StartContainer for \"802ab6c4322fe10845eb90fa574ae85b6d0a2c4c55bb3789158c9c6d670c875b\"" Nov 1 00:22:43.248612 systemd[1]: Started cri-containerd-02eb95adef5ab46cb1336cf8ffda0ca40b603cd8f4a2f2b23e52e2f907dc223f.scope - libcontainer container 02eb95adef5ab46cb1336cf8ffda0ca40b603cd8f4a2f2b23e52e2f907dc223f. Nov 1 00:22:43.257600 systemd[1]: Started cri-containerd-213fd19627aa546668a7b07e32b04502d7fb6325e5294462e22cdbcc76ca7b44.scope - libcontainer container 213fd19627aa546668a7b07e32b04502d7fb6325e5294462e22cdbcc76ca7b44. 
Nov 1 00:22:43.258299 kubelet[2177]: E1101 00:22:43.258155 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.159.149:6443/api/v1/namespaces/default/events\": dial tcp 172.237.159.149:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-159-149.1873ba285e9b8f03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-159-149,UID:172-237-159-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-159-149,},FirstTimestamp:2025-11-01 00:22:41.780707075 +0000 UTC m=+0.437464321,LastTimestamp:2025-11-01 00:22:41.780707075 +0000 UTC m=+0.437464321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-159-149,}" Nov 1 00:22:43.260117 kubelet[2177]: W1101 00:22:43.260029 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.159.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.159.149:6443: connect: connection refused Nov 1 00:22:43.260117 kubelet[2177]: E1101 00:22:43.260092 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.159.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.159.149:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:43.266647 systemd[1]: Started cri-containerd-802ab6c4322fe10845eb90fa574ae85b6d0a2c4c55bb3789158c9c6d670c875b.scope - libcontainer container 802ab6c4322fe10845eb90fa574ae85b6d0a2c4c55bb3789158c9c6d670c875b. 
Nov 1 00:22:43.355628 containerd[1475]: time="2025-11-01T00:22:43.355580325Z" level=info msg="StartContainer for \"213fd19627aa546668a7b07e32b04502d7fb6325e5294462e22cdbcc76ca7b44\" returns successfully" Nov 1 00:22:43.357921 containerd[1475]: time="2025-11-01T00:22:43.355904695Z" level=info msg="StartContainer for \"02eb95adef5ab46cb1336cf8ffda0ca40b603cd8f4a2f2b23e52e2f907dc223f\" returns successfully" Nov 1 00:22:43.368940 containerd[1475]: time="2025-11-01T00:22:43.368788885Z" level=info msg="StartContainer for \"802ab6c4322fe10845eb90fa574ae85b6d0a2c4c55bb3789158c9c6d670c875b\" returns successfully" Nov 1 00:22:43.385508 kubelet[2177]: I1101 00:22:43.384049 2177 kubelet_node_status.go:75] "Attempting to register node" node="172-237-159-149" Nov 1 00:22:43.385508 kubelet[2177]: E1101 00:22:43.384320 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.159.149:6443/api/v1/nodes\": dial tcp 172.237.159.149:6443: connect: connection refused" node="172-237-159-149" Nov 1 00:22:43.841649 kubelet[2177]: E1101 00:22:43.841411 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:43.841649 kubelet[2177]: E1101 00:22:43.841550 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:43.845532 kubelet[2177]: E1101 00:22:43.845359 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:43.845532 kubelet[2177]: E1101 00:22:43.845446 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:43.849521 kubelet[2177]: E1101 00:22:43.847033 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:43.849521 kubelet[2177]: E1101 00:22:43.847108 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:44.808928 kubelet[2177]: E1101 00:22:44.808889 2177 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:44.851600 kubelet[2177]: E1101 00:22:44.851549 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:44.852322 kubelet[2177]: E1101 00:22:44.851670 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:44.852322 kubelet[2177]: E1101 00:22:44.852078 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-159-149\" not found" node="172-237-159-149" Nov 1 00:22:44.853518 kubelet[2177]: E1101 00:22:44.852380 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:44.986750 kubelet[2177]: I1101 00:22:44.986521 2177 kubelet_node_status.go:75] "Attempting to register node" node="172-237-159-149" Nov 1 00:22:44.993336 kubelet[2177]: I1101 00:22:44.993271 2177 kubelet_node_status.go:78] "Successfully registered node" node="172-237-159-149" Nov 1 00:22:44.998673 kubelet[2177]: I1101 00:22:44.998369 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:45.007125 kubelet[2177]: E1101 00:22:45.007059 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-159-149\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:45.007125 kubelet[2177]: I1101 00:22:45.007104 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:45.009697 kubelet[2177]: E1101 00:22:45.009665 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-159-149\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:45.009741 kubelet[2177]: I1101 00:22:45.009696 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:45.010965 kubelet[2177]: E1101 00:22:45.010924 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-159-149\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:45.772451 kubelet[2177]: I1101 00:22:45.772363 2177 apiserver.go:52] "Watching apiserver" Nov 1 00:22:45.800030 kubelet[2177]: I1101 00:22:45.799869 2177 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:46.184624 kubelet[2177]: I1101 00:22:46.184447 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:46.193715 kubelet[2177]: E1101 00:22:46.193664 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:46.422266 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:22:46.565292 kubelet[2177]: I1101 00:22:46.564791 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:46.571898 kubelet[2177]: E1101 00:22:46.571848 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:46.792817 systemd[1]: Reloading requested from client PID 2453 ('systemctl') (unit session-7.scope)... Nov 1 00:22:46.792840 systemd[1]: Reloading... 
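The three "Failed creating a mirror pod" errors above are transient: mirror pods for static control-plane pods are admitted with priorityClassName system-node-critical, and that PriorityClass only exists once the freshly started apiserver's bootstrap controller has created it. A hedged way to watch for it, assuming a local `kubectl proxy` on 127.0.0.1:8001 (the proxy and port are assumptions; the API path is the standard scheduling.k8s.io one):

    import urllib.error
    import urllib.request

    def priority_class_exists(name: str, base: str = "http://127.0.0.1:8001") -> bool:
        """Probe the API for a bootstrap PriorityClass via an assumed local kubectl proxy."""
        url = f"{base}/apis/scheduling.k8s.io/v1/priorityclasses/{name}"
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except urllib.error.HTTPError:
            return False  # 404 until the apiserver bootstrap controller runs

    # priority_class_exists("system-node-critical")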
Nov 1 00:22:46.853980 kubelet[2177]: E1101 00:22:46.853846 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:46.855508 kubelet[2177]: E1101 00:22:46.854357 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:46.897895 zram_generator::config[2492]: No configuration found. Nov 1 00:22:47.063046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:47.152961 systemd[1]: Reloading finished in 359 ms. Nov 1 00:22:47.210616 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:47.224527 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:22:47.224804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:47.234042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:47.430506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:47.442068 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:47.498560 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:47.498560 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:47.498560 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:47.499326 kubelet[2543]: I1101 00:22:47.498674 2543 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:47.506914 kubelet[2543]: I1101 00:22:47.506825 2543 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:47.506914 kubelet[2543]: I1101 00:22:47.506844 2543 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:47.507052 kubelet[2543]: I1101 00:22:47.507028 2543 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:47.508223 kubelet[2543]: I1101 00:22:47.508162 2543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
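The deprecation warnings repeat on this second kubelet start (PID 2543): the flags still work, but they belong in the KubeletConfiguration file instead. A sketch of the mapping for the two flags named in the log (field names follow the kubelet-config-file docs linked in the warning; treat them as assumptions and verify against your kubelet version):

    # --pod-infra-container-image is being removed in 1.35; per the warning, the
    # image garbage collector will take the sandbox image from the CRI instead,
    # so it has no config-file replacement here.
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag} -> KubeletConfiguration.{field}")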
Nov 1 00:22:47.510743 kubelet[2543]: I1101 00:22:47.510717 2543 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:47.515568 kubelet[2543]: E1101 00:22:47.515546 2543 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:47.517413 kubelet[2543]: I1101 00:22:47.515634 2543 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:47.521339 kubelet[2543]: I1101 00:22:47.521323 2543 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:22:47.521723 kubelet[2543]: I1101 00:22:47.521695 2543 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:47.522022 kubelet[2543]: I1101 00:22:47.521855 2543 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-159-149","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:47.522147 kubelet[2543]: I1101 00:22:47.522133 2543 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:47.522200 kubelet[2543]: I1101 00:22:47.522192 2543 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:47.522285 kubelet[2543]: I1101 00:22:47.522276 2543 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:47.522574 kubelet[2543]: I1101 00:22:47.522559 2543 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:22:47.522646 kubelet[2543]: I1101 00:22:47.522628 2543 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:47.522902 kubelet[2543]: I1101 00:22:47.522890 2543 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:47.522974 kubelet[2543]: I1101 00:22:47.522959 2543 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Nov 1 00:22:47.530499 kubelet[2543]: I1101 00:22:47.527975 2543 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:47.530499 kubelet[2543]: I1101 00:22:47.528318 2543 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:47.530499 kubelet[2543]: I1101 00:22:47.529314 2543 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:47.530499 kubelet[2543]: I1101 00:22:47.529337 2543 server.go:1287] "Started kubelet" Nov 1 00:22:47.532037 kubelet[2543]: I1101 00:22:47.531740 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:47.534082 kubelet[2543]: I1101 00:22:47.534044 2543 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:47.551294 kubelet[2543]: I1101 00:22:47.551259 2543 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:47.552042 kubelet[2543]: I1101 00:22:47.536180 2543 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:47.552112 kubelet[2543]: I1101 00:22:47.534120 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:47.552318 kubelet[2543]: I1101 00:22:47.552296 2543 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:47.552360 kubelet[2543]: I1101 00:22:47.546241 2543 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:47.552441 kubelet[2543]: I1101 00:22:47.552415 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:47.554470 kubelet[2543]: I1101 00:22:47.534365 2543 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:47.554470 kubelet[2543]: I1101 00:22:47.536189 2543 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:22:47.554470 kubelet[2543]: I1101 00:22:47.554433 2543 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:47.554470 kubelet[2543]: E1101 00:22:47.536272 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-159-149\" not found" Nov 1 00:22:47.558078 kubelet[2543]: I1101 00:22:47.558034 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:47.560974 kubelet[2543]: I1101 00:22:47.560136 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:47.560974 kubelet[2543]: I1101 00:22:47.560160 2543 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:47.560974 kubelet[2543]: I1101 00:22:47.560175 2543 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
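The HardEvictionThresholds embedded in the container manager nodeConfig above (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, inodesFree < 5%) mix absolute quantities with percentages of capacity. A simplified sketch of how one such threshold is checked; the types are illustrative, not kubelet's actual ones:

    package main

    import "fmt"

    // threshold holds either an absolute byte quantity or a fraction of
    // capacity, matching the Quantity/Percentage pairs in the nodeConfig.
    type threshold struct {
    	signal     string
    	quantity   int64   // absolute bytes; 0 when percentage-based
    	percentage float64 // fraction of capacity; 0 when quantity-based
    }

    // exceeded reports whether the signal has crossed its hard eviction line.
    func exceeded(t threshold, available, capacity int64) bool {
    	limit := t.quantity
    	if t.percentage > 0 {
    		limit = int64(t.percentage * float64(capacity))
    	}
    	return available < limit
    }

    func main() {
    	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
    	nodefs := threshold{signal: "nodefs.available", percentage: 0.10}

    	fmt.Println(exceeded(mem, 64<<20, 8<<30))      // true: 64Mi available < 100Mi
    	fmt.Println(exceeded(nodefs, 20<<30, 100<<30)) // false: 20% free, line is 10%
    }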
Nov 1 00:22:47.560974 kubelet[2543]: I1101 00:22:47.560181 2543 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:47.560974 kubelet[2543]: E1101 00:22:47.560223 2543 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:47.567777 kubelet[2543]: I1101 00:22:47.566782 2543 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:47.586692 kubelet[2543]: E1101 00:22:47.586666 2543 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:47.625127 kubelet[2543]: I1101 00:22:47.625105 2543 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:47.625256 kubelet[2543]: I1101 00:22:47.625241 2543 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:47.625409 kubelet[2543]: I1101 00:22:47.625385 2543 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:47.625661 kubelet[2543]: I1101 00:22:47.625645 2543 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:22:47.625734 kubelet[2543]: I1101 00:22:47.625713 2543 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:22:47.625775 kubelet[2543]: I1101 00:22:47.625768 2543 policy_none.go:49] "None policy: Start" Nov 1 00:22:47.626000 kubelet[2543]: I1101 00:22:47.625992 2543 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:47.626046 kubelet[2543]: I1101 00:22:47.626038 2543 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:47.626171 kubelet[2543]: I1101 00:22:47.626159 2543 state_mem.go:75] "Updated machine memory state" Nov 1 00:22:47.630751 kubelet[2543]: I1101 00:22:47.630736 2543 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:47.631473 kubelet[2543]: I1101 00:22:47.631435 2543 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:47.631968 kubelet[2543]: I1101 00:22:47.631933 2543 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:47.632182 kubelet[2543]: I1101 00:22:47.632153 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:47.633188 kubelet[2543]: E1101 00:22:47.633159 2543 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:22:47.661299 kubelet[2543]: I1101 00:22:47.661244 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:47.661778 kubelet[2543]: I1101 00:22:47.661552 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:47.661853 kubelet[2543]: I1101 00:22:47.661652 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:47.669016 kubelet[2543]: E1101 00:22:47.668973 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-159-149\" already exists" pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:47.669213 kubelet[2543]: E1101 00:22:47.669135 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-159-149\" already exists" pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:47.735376 kubelet[2543]: I1101 00:22:47.735354 2543 kubelet_node_status.go:75] "Attempting to register node" node="172-237-159-149" Nov 1 00:22:47.741471 kubelet[2543]: I1101 00:22:47.741421 2543 kubelet_node_status.go:124] "Node was previously registered" node="172-237-159-149" Nov 1 00:22:47.741471 kubelet[2543]: I1101 00:22:47.741465 2543 kubelet_node_status.go:78] "Successfully registered node" node="172-237-159-149" Nov 1 00:22:47.755896 kubelet[2543]: I1101 00:22:47.755663 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f16dd616470c49c4779dec1b68bf7bce-ca-certs\") pod \"kube-apiserver-172-237-159-149\" (UID: \"f16dd616470c49c4779dec1b68bf7bce\") " pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:47.755896 kubelet[2543]: I1101 00:22:47.755880 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f16dd616470c49c4779dec1b68bf7bce-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-159-149\" (UID: \"f16dd616470c49c4779dec1b68bf7bce\") " pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:47.755975 kubelet[2543]: I1101 00:22:47.755901 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-flexvolume-dir\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:47.755975 kubelet[2543]: I1101 00:22:47.755916 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/792d6a04aad0afb682ea306a711ba247-kubeconfig\") pod \"kube-scheduler-172-237-159-149\" (UID: \"792d6a04aad0afb682ea306a711ba247\") " pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:47.755975 kubelet[2543]: I1101 00:22:47.755933 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:47.755975 
kubelet[2543]: I1101 00:22:47.755947 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f16dd616470c49c4779dec1b68bf7bce-k8s-certs\") pod \"kube-apiserver-172-237-159-149\" (UID: \"f16dd616470c49c4779dec1b68bf7bce\") " pod="kube-system/kube-apiserver-172-237-159-149" Nov 1 00:22:47.755975 kubelet[2543]: I1101 00:22:47.755959 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-ca-certs\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:47.756074 kubelet[2543]: I1101 00:22:47.755973 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-k8s-certs\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:47.756074 kubelet[2543]: I1101 00:22:47.755986 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e39adfb74a9afcfe7924a44f26ba4be-kubeconfig\") pod \"kube-controller-manager-172-237-159-149\" (UID: \"2e39adfb74a9afcfe7924a44f26ba4be\") " pod="kube-system/kube-controller-manager-172-237-159-149" Nov 1 00:22:47.970675 kubelet[2543]: E1101 00:22:47.970031 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:47.970675 kubelet[2543]: E1101 00:22:47.970178 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:47.970675 kubelet[2543]: E1101 00:22:47.970614 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:48.525397 kubelet[2543]: I1101 00:22:48.524258 2543 apiserver.go:52] "Watching apiserver" Nov 1 00:22:48.555264 kubelet[2543]: I1101 00:22:48.555201 2543 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:48.600547 kubelet[2543]: E1101 00:22:48.600471 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:48.600919 kubelet[2543]: I1101 00:22:48.600889 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:48.601276 kubelet[2543]: E1101 00:22:48.601227 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:48.613019 kubelet[2543]: E1101 00:22:48.612910 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-159-149\" already exists" pod="kube-system/kube-scheduler-172-237-159-149" Nov 1 00:22:48.613019 kubelet[2543]: E1101 
00:22:48.613006 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:48.628018 kubelet[2543]: I1101 00:22:48.627732 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-159-149" podStartSLOduration=2.627723905 podStartE2EDuration="2.627723905s" podCreationTimestamp="2025-11-01 00:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:48.626523185 +0000 UTC m=+1.178987131" watchObservedRunningTime="2025-11-01 00:22:48.627723905 +0000 UTC m=+1.180187811" Nov 1 00:22:48.635201 kubelet[2543]: I1101 00:22:48.635155 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-159-149" podStartSLOduration=1.6351477650000001 podStartE2EDuration="1.635147765s" podCreationTimestamp="2025-11-01 00:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:48.634062345 +0000 UTC m=+1.186526251" watchObservedRunningTime="2025-11-01 00:22:48.635147765 +0000 UTC m=+1.187611691" Nov 1 00:22:49.603205 kubelet[2543]: E1101 00:22:49.602847 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:49.603205 kubelet[2543]: E1101 00:22:49.603039 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:50.604533 kubelet[2543]: E1101 00:22:50.604460 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:51.619270 kubelet[2543]: I1101 00:22:51.619224 2543 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:22:51.619994 kubelet[2543]: I1101 00:22:51.619903 2543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:22:51.620037 containerd[1475]: time="2025-11-01T00:22:51.619681003Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:22:52.127278 kubelet[2543]: I1101 00:22:52.127209 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-159-149" podStartSLOduration=6.12719218 podStartE2EDuration="6.12719218s" podCreationTimestamp="2025-11-01 00:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:48.640684445 +0000 UTC m=+1.193148351" watchObservedRunningTime="2025-11-01 00:22:52.12719218 +0000 UTC m=+4.679656086" Nov 1 00:22:52.144339 systemd[1]: Created slice kubepods-besteffort-podd4b347d0_49cc_4b43_ba3d_6988ec9ec13c.slice - libcontainer container kubepods-besteffort-podd4b347d0_49cc_4b43_ba3d_6988ec9ec13c.slice. 
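The pod_startup_latency_tracker entries above are internally consistent: for these static pods the pull timestamps are the zero time, and the reported podStartSLOduration is exactly watchObservedRunningTime minus podCreationTimestamp. Checking the kube-scheduler figures with timestamps copied from the entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-11-01 00:22:46 +0000 UTC")
    	watched, _ := time.Parse(layout, "2025-11-01 00:22:48.627723905 +0000 UTC")

    	// No image pull window (both pull timestamps are the zero time),
    	// so the SLO duration is just the difference.
    	fmt.Println(watched.Sub(created)) // 2.627723905s, matching podStartSLOduration
    }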
Nov 1 00:22:52.291502 kubelet[2543]: I1101 00:22:52.291450 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4b347d0-49cc-4b43-ba3d-6988ec9ec13c-kube-proxy\") pod \"kube-proxy-sgdpf\" (UID: \"d4b347d0-49cc-4b43-ba3d-6988ec9ec13c\") " pod="kube-system/kube-proxy-sgdpf" Nov 1 00:22:52.291649 kubelet[2543]: I1101 00:22:52.291533 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4b347d0-49cc-4b43-ba3d-6988ec9ec13c-xtables-lock\") pod \"kube-proxy-sgdpf\" (UID: \"d4b347d0-49cc-4b43-ba3d-6988ec9ec13c\") " pod="kube-system/kube-proxy-sgdpf" Nov 1 00:22:52.291649 kubelet[2543]: I1101 00:22:52.291561 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db5m5\" (UniqueName: \"kubernetes.io/projected/d4b347d0-49cc-4b43-ba3d-6988ec9ec13c-kube-api-access-db5m5\") pod \"kube-proxy-sgdpf\" (UID: \"d4b347d0-49cc-4b43-ba3d-6988ec9ec13c\") " pod="kube-system/kube-proxy-sgdpf" Nov 1 00:22:52.291649 kubelet[2543]: I1101 00:22:52.291586 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4b347d0-49cc-4b43-ba3d-6988ec9ec13c-lib-modules\") pod \"kube-proxy-sgdpf\" (UID: \"d4b347d0-49cc-4b43-ba3d-6988ec9ec13c\") " pod="kube-system/kube-proxy-sgdpf" Nov 1 00:22:52.454775 kubelet[2543]: E1101 00:22:52.454740 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:52.455963 containerd[1475]: time="2025-11-01T00:22:52.455911547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgdpf,Uid:d4b347d0-49cc-4b43-ba3d-6988ec9ec13c,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:52.529433 containerd[1475]: time="2025-11-01T00:22:52.528218855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:52.529433 containerd[1475]: time="2025-11-01T00:22:52.528301491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:52.529433 containerd[1475]: time="2025-11-01T00:22:52.528317962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:52.529433 containerd[1475]: time="2025-11-01T00:22:52.529020090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:52.576430 systemd[1]: Started cri-containerd-d1bf346cf8895b149126028e617756869eac5516a4104db4f9ec0cd88d19e080.scope - libcontainer container d1bf346cf8895b149126028e617756869eac5516a4104db4f9ec0cd88d19e080. 
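The sandbox start above, and the RunPodSandbox return, CreateContainer, and StartContainer lines that follow, are the kubelet driving containerd through the CRI. A rough sketch of that call sequence against the runtime v1 gRPC API — not kubelet's actual code path; the socket is containerd's default, and the kube-proxy image name is an assumption, since the log never prints it:

    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtime.NewRuntimeServiceClient(conn)
    	ctx := context.TODO()

    	// 1. RunPodSandbox: the pause sandbox whose id appears in the log.
    	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{
    		Config: &runtime.PodSandboxConfig{
    			Metadata: &runtime.PodSandboxMetadata{
    				Name:      "kube-proxy-sgdpf",
    				Namespace: "kube-system",
    				Uid:       "d4b347d0-49cc-4b43-ba3d-6988ec9ec13c",
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}

    	// 2. CreateContainer inside the sandbox; image name is hypothetical.
    	c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtime.ContainerConfig{
    			Metadata: &runtime.ContainerMetadata{Name: "kube-proxy"},
    			Image:    &runtime.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}

    	// 3. StartContainer, mirroring "StartContainer ... returns successfully".
    	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
    		ContainerId: c.ContainerId,
    	}); err != nil {
    		panic(err)
    	}
    }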
Nov 1 00:22:52.623942 containerd[1475]: time="2025-11-01T00:22:52.623871324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgdpf,Uid:d4b347d0-49cc-4b43-ba3d-6988ec9ec13c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1bf346cf8895b149126028e617756869eac5516a4104db4f9ec0cd88d19e080\"" Nov 1 00:22:52.626157 kubelet[2543]: E1101 00:22:52.625676 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:52.630209 containerd[1475]: time="2025-11-01T00:22:52.629983506Z" level=info msg="CreateContainer within sandbox \"d1bf346cf8895b149126028e617756869eac5516a4104db4f9ec0cd88d19e080\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:22:52.665409 containerd[1475]: time="2025-11-01T00:22:52.664824319Z" level=info msg="CreateContainer within sandbox \"d1bf346cf8895b149126028e617756869eac5516a4104db4f9ec0cd88d19e080\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"350256df1a1169da1aa3fcf67055f7a65eae1d2ab1ffaa698e656d4a941d8d12\"" Nov 1 00:22:52.667640 containerd[1475]: time="2025-11-01T00:22:52.666689738Z" level=info msg="StartContainer for \"350256df1a1169da1aa3fcf67055f7a65eae1d2ab1ffaa698e656d4a941d8d12\"" Nov 1 00:22:52.711955 systemd[1]: Created slice kubepods-besteffort-podf66c0a8a_4687_4403_ad7b_745356623911.slice - libcontainer container kubepods-besteffort-podf66c0a8a_4687_4403_ad7b_745356623911.slice. Nov 1 00:22:52.721874 systemd[1]: Started cri-containerd-350256df1a1169da1aa3fcf67055f7a65eae1d2ab1ffaa698e656d4a941d8d12.scope - libcontainer container 350256df1a1169da1aa3fcf67055f7a65eae1d2ab1ffaa698e656d4a941d8d12. Nov 1 00:22:52.758986 containerd[1475]: time="2025-11-01T00:22:52.758883898Z" level=info msg="StartContainer for \"350256df1a1169da1aa3fcf67055f7a65eae1d2ab1ffaa698e656d4a941d8d12\" returns successfully" Nov 1 00:22:52.798532 kubelet[2543]: I1101 00:22:52.797889 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4j7\" (UniqueName: \"kubernetes.io/projected/f66c0a8a-4687-4403-ad7b-745356623911-kube-api-access-wp4j7\") pod \"tigera-operator-7dcd859c48-fvjxd\" (UID: \"f66c0a8a-4687-4403-ad7b-745356623911\") " pod="tigera-operator/tigera-operator-7dcd859c48-fvjxd" Nov 1 00:22:52.798532 kubelet[2543]: I1101 00:22:52.797931 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f66c0a8a-4687-4403-ad7b-745356623911-var-lib-calico\") pod \"tigera-operator-7dcd859c48-fvjxd\" (UID: \"f66c0a8a-4687-4403-ad7b-745356623911\") " pod="tigera-operator/tigera-operator-7dcd859c48-fvjxd" Nov 1 00:22:53.018575 containerd[1475]: time="2025-11-01T00:22:53.018084830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fvjxd,Uid:f66c0a8a-4687-4403-ad7b-745356623911,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:22:53.048282 containerd[1475]: time="2025-11-01T00:22:53.047085136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:53.048732 containerd[1475]: time="2025-11-01T00:22:53.048661838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:53.048956 containerd[1475]: time="2025-11-01T00:22:53.048923914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:53.049419 containerd[1475]: time="2025-11-01T00:22:53.049261576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:53.075620 systemd[1]: Started cri-containerd-11790b06da4b1b5784724f96cae7b0738e3c1980a16a503e0f62fa9ffc21dbec.scope - libcontainer container 11790b06da4b1b5784724f96cae7b0738e3c1980a16a503e0f62fa9ffc21dbec. Nov 1 00:22:53.133468 containerd[1475]: time="2025-11-01T00:22:53.133103409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fvjxd,Uid:f66c0a8a-4687-4403-ad7b-745356623911,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"11790b06da4b1b5784724f96cae7b0738e3c1980a16a503e0f62fa9ffc21dbec\"" Nov 1 00:22:53.138583 containerd[1475]: time="2025-11-01T00:22:53.138148835Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:22:53.612915 kubelet[2543]: E1101 00:22:53.612854 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:53.624162 kubelet[2543]: I1101 00:22:53.624094 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sgdpf" podStartSLOduration=1.624075632 podStartE2EDuration="1.624075632s" podCreationTimestamp="2025-11-01 00:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:53.62389816 +0000 UTC m=+6.176362086" watchObservedRunningTime="2025-11-01 00:22:53.624075632 +0000 UTC m=+6.176539538" Nov 1 00:22:54.475164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178436838.mount: Deactivated successfully. 
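The PullImage request above goes to containerd's CRI image service rather than the runtime service. A matching sketch under the same assumptions as the runtime-service example earlier (default socket, no registry auth), with the image name taken from the log:

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	img := runtime.NewImageServiceClient(conn)
    	resp, err := img.PullImage(context.TODO(), &runtime.PullImageRequest{
    		Image: &runtime.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pulled:", resp.ImageRef) // digest-qualified ref, as in the log
    }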
Nov 1 00:22:55.364772 kubelet[2543]: E1101 00:22:55.364410 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:55.619750 kubelet[2543]: E1101 00:22:55.617873 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:55.850013 containerd[1475]: time="2025-11-01T00:22:55.849930144Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:55.851113 containerd[1475]: time="2025-11-01T00:22:55.850903770Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:22:55.851660 containerd[1475]: time="2025-11-01T00:22:55.851626051Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:55.854880 containerd[1475]: time="2025-11-01T00:22:55.854459022Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:55.855606 containerd[1475]: time="2025-11-01T00:22:55.855336572Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.717154014s" Nov 1 00:22:55.855606 containerd[1475]: time="2025-11-01T00:22:55.855592416Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:22:55.858686 containerd[1475]: time="2025-11-01T00:22:55.858368644Z" level=info msg="CreateContainer within sandbox \"11790b06da4b1b5784724f96cae7b0738e3c1980a16a503e0f62fa9ffc21dbec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:22:55.869096 containerd[1475]: time="2025-11-01T00:22:55.869061822Z" level=info msg="CreateContainer within sandbox \"11790b06da4b1b5784724f96cae7b0738e3c1980a16a503e0f62fa9ffc21dbec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8319200b1325a366a39e4c04b6cf49228973063e53bb3c1a02a076118ba499ee\"" Nov 1 00:22:55.870024 containerd[1475]: time="2025-11-01T00:22:55.869933391Z" level=info msg="StartContainer for \"8319200b1325a366a39e4c04b6cf49228973063e53bb3c1a02a076118ba499ee\"" Nov 1 00:22:55.916670 systemd[1]: Started cri-containerd-8319200b1325a366a39e4c04b6cf49228973063e53bb3c1a02a076118ba499ee.scope - libcontainer container 8319200b1325a366a39e4c04b6cf49228973063e53bb3c1a02a076118ba499ee. 
Nov 1 00:22:55.949723 containerd[1475]: time="2025-11-01T00:22:55.949629642Z" level=info msg="StartContainer for \"8319200b1325a366a39e4c04b6cf49228973063e53bb3c1a02a076118ba499ee\" returns successfully" Nov 1 00:22:56.619880 kubelet[2543]: E1101 00:22:56.619817 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:56.631719 kubelet[2543]: I1101 00:22:56.631634 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-fvjxd" podStartSLOduration=1.911125399 podStartE2EDuration="4.631613486s" podCreationTimestamp="2025-11-01 00:22:52 +0000 UTC" firstStartedPulling="2025-11-01 00:22:53.136460766 +0000 UTC m=+5.688924682" lastFinishedPulling="2025-11-01 00:22:55.856948853 +0000 UTC m=+8.409412769" observedRunningTime="2025-11-01 00:22:56.630717808 +0000 UTC m=+9.183181724" watchObservedRunningTime="2025-11-01 00:22:56.631613486 +0000 UTC m=+9.184077392" Nov 1 00:22:56.857991 kubelet[2543]: E1101 00:22:56.857941 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:22:57.621851 kubelet[2543]: E1101 00:22:57.621807 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:00.153056 kubelet[2543]: E1101 00:23:00.153002 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:00.626238 kubelet[2543]: E1101 00:23:00.626193 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:01.165700 update_engine[1457]: I20251101 00:23:01.165603 1457 update_attempter.cc:509] Updating boot flags... Nov 1 00:23:01.215525 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2919) Nov 1 00:23:01.299802 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2923) Nov 1 00:23:01.365197 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2923) Nov 1 00:23:01.732473 sudo[1686]: pam_unix(sudo:session): session closed for user root Nov 1 00:23:01.784143 sshd[1683]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:01.788881 systemd[1]: sshd@6-172.237.159.149:22-139.178.68.195:38978.service: Deactivated successfully. Nov 1 00:23:01.791801 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:23:01.792366 systemd[1]: session-7.scope: Consumed 4.278s CPU time, 157.0M memory peak, 0B memory swap peak. Nov 1 00:23:01.795323 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:23:01.799682 systemd-logind[1450]: Removed session 7. Nov 1 00:23:06.772147 systemd[1]: Created slice kubepods-besteffort-podbece0d29_c5bd_499e_bec2_26cf81e62140.slice - libcontainer container kubepods-besteffort-podbece0d29_c5bd_499e_bec2_26cf81e62140.slice. 
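The tigera-operator startup-latency entry above decomposes cleanly: the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes the image pull window (lastFinishedPulling minus firstStartedPulling, about 2.72s — the pull the containerd lines at 00:22:55 reported). Verifying the arithmetic with timestamps copied from the entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-11-01 00:22:52 +0000 UTC")
    	watched, _ := time.Parse(layout, "2025-11-01 00:22:56.631613486 +0000 UTC")
    	pullStart, _ := time.Parse(layout, "2025-11-01 00:22:53.136460766 +0000 UTC")
    	pullEnd, _ := time.Parse(layout, "2025-11-01 00:22:55.856948853 +0000 UTC")

    	e2e := watched.Sub(created)         // 4.631613486s = podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // 1.911125399s = podStartSLOduration
    	fmt.Println(e2e, slo)
    }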
Nov 1 00:23:06.795986 kubelet[2543]: I1101 00:23:06.795881 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbr2c\" (UniqueName: \"kubernetes.io/projected/bece0d29-c5bd-499e-bec2-26cf81e62140-kube-api-access-cbr2c\") pod \"calico-typha-7fd7f8bc88-nnxhm\" (UID: \"bece0d29-c5bd-499e-bec2-26cf81e62140\") " pod="calico-system/calico-typha-7fd7f8bc88-nnxhm" Nov 1 00:23:06.796438 kubelet[2543]: I1101 00:23:06.795998 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bece0d29-c5bd-499e-bec2-26cf81e62140-typha-certs\") pod \"calico-typha-7fd7f8bc88-nnxhm\" (UID: \"bece0d29-c5bd-499e-bec2-26cf81e62140\") " pod="calico-system/calico-typha-7fd7f8bc88-nnxhm" Nov 1 00:23:06.796438 kubelet[2543]: I1101 00:23:06.796025 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bece0d29-c5bd-499e-bec2-26cf81e62140-tigera-ca-bundle\") pod \"calico-typha-7fd7f8bc88-nnxhm\" (UID: \"bece0d29-c5bd-499e-bec2-26cf81e62140\") " pod="calico-system/calico-typha-7fd7f8bc88-nnxhm" Nov 1 00:23:06.967128 systemd[1]: Created slice kubepods-besteffort-podf9213f81_8444_4554_87c5_dd1c87f94618.slice - libcontainer container kubepods-besteffort-podf9213f81_8444_4554_87c5_dd1c87f94618.slice. Nov 1 00:23:06.999323 kubelet[2543]: I1101 00:23:06.999270 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-cni-net-dir\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:06.999323 kubelet[2543]: I1101 00:23:06.999331 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9213f81-8444-4554-87c5-dd1c87f94618-tigera-ca-bundle\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:06.999564 kubelet[2543]: I1101 00:23:06.999365 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-policysync\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:06.999564 kubelet[2543]: I1101 00:23:06.999383 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-var-run-calico\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:06.999564 kubelet[2543]: I1101 00:23:06.999404 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-cni-bin-dir\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:06.999564 kubelet[2543]: I1101 00:23:06.999422 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwk52\" 
(UniqueName: \"kubernetes.io/projected/f9213f81-8444-4554-87c5-dd1c87f94618-kube-api-access-nwk52\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:06.999564 kubelet[2543]: I1101 00:23:06.999443 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-cni-log-dir\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:06.999689 kubelet[2543]: I1101 00:23:06.999460 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-flexvol-driver-host\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:07.001666 kubelet[2543]: I1101 00:23:07.001375 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-var-lib-calico\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:07.001666 kubelet[2543]: I1101 00:23:07.001416 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-xtables-lock\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:07.001666 kubelet[2543]: I1101 00:23:07.001436 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9213f81-8444-4554-87c5-dd1c87f94618-lib-modules\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:07.001666 kubelet[2543]: I1101 00:23:07.001455 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f9213f81-8444-4554-87c5-dd1c87f94618-node-certs\") pod \"calico-node-7ldz2\" (UID: \"f9213f81-8444-4554-87c5-dd1c87f94618\") " pod="calico-system/calico-node-7ldz2" Nov 1 00:23:07.077925 kubelet[2543]: E1101 00:23:07.077588 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:07.078551 containerd[1475]: time="2025-11-01T00:23:07.078189852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fd7f8bc88-nnxhm,Uid:bece0d29-c5bd-499e-bec2-26cf81e62140,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:07.112558 kubelet[2543]: E1101 00:23:07.111074 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.112558 kubelet[2543]: W1101 00:23:07.111099 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.112558 kubelet[2543]: E1101 00:23:07.111133 2543 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.116153 kubelet[2543]: E1101 00:23:07.115697 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.116153 kubelet[2543]: W1101 00:23:07.115709 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.116153 kubelet[2543]: E1101 00:23:07.115721 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.116589 containerd[1475]: time="2025-11-01T00:23:07.115826528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:07.116589 containerd[1475]: time="2025-11-01T00:23:07.115901330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:07.116589 containerd[1475]: time="2025-11-01T00:23:07.116101365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:07.119889 kubelet[2543]: E1101 00:23:07.119280 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.119889 kubelet[2543]: W1101 00:23:07.119613 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.119889 kubelet[2543]: E1101 00:23:07.119627 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.120429 kubelet[2543]: E1101 00:23:07.120318 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.120429 kubelet[2543]: W1101 00:23:07.120329 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.120429 kubelet[2543]: E1101 00:23:07.120341 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.121089 kubelet[2543]: E1101 00:23:07.120906 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.121089 kubelet[2543]: W1101 00:23:07.120922 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.121089 kubelet[2543]: E1101 00:23:07.120931 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:07.121989 kubelet[2543]: E1101 00:23:07.121648 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.121989 kubelet[2543]: W1101 00:23:07.121660 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.121989 kubelet[2543]: E1101 00:23:07.121670 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.122829 kubelet[2543]: E1101 00:23:07.122815 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.123134 kubelet[2543]: W1101 00:23:07.122905 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.123229 kubelet[2543]: E1101 00:23:07.123183 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.124535 containerd[1475]: time="2025-11-01T00:23:07.124227948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:07.124612 kubelet[2543]: E1101 00:23:07.124357 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.124612 kubelet[2543]: W1101 00:23:07.124367 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.124612 kubelet[2543]: E1101 00:23:07.124377 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.125955 kubelet[2543]: E1101 00:23:07.125799 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.125955 kubelet[2543]: W1101 00:23:07.125812 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.125955 kubelet[2543]: E1101 00:23:07.125860 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:07.126692 kubelet[2543]: E1101 00:23:07.126561 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.126692 kubelet[2543]: W1101 00:23:07.126601 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.126692 kubelet[2543]: E1101 00:23:07.126615 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.127829 kubelet[2543]: E1101 00:23:07.127673 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.127829 kubelet[2543]: W1101 00:23:07.127685 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.127829 kubelet[2543]: E1101 00:23:07.127694 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.130524 kubelet[2543]: E1101 00:23:07.128265 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.130524 kubelet[2543]: W1101 00:23:07.128278 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.130524 kubelet[2543]: E1101 00:23:07.128288 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.133773 kubelet[2543]: E1101 00:23:07.133638 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.133773 kubelet[2543]: W1101 00:23:07.133669 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.133773 kubelet[2543]: E1101 00:23:07.133705 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.135865 kubelet[2543]: E1101 00:23:07.135081 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.135865 kubelet[2543]: W1101 00:23:07.135148 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.135865 kubelet[2543]: E1101 00:23:07.135165 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:07.158667 systemd[1]: Started cri-containerd-52a8fd0930cd4b0953a269104b9bbad94ac1fd5cdbab6fee0961727dc06d8465.scope - libcontainer container 52a8fd0930cd4b0953a269104b9bbad94ac1fd5cdbab6fee0961727dc06d8465. Nov 1 00:23:07.231623 containerd[1475]: time="2025-11-01T00:23:07.231564741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fd7f8bc88-nnxhm,Uid:bece0d29-c5bd-499e-bec2-26cf81e62140,Namespace:calico-system,Attempt:0,} returns sandbox id \"52a8fd0930cd4b0953a269104b9bbad94ac1fd5cdbab6fee0961727dc06d8465\"" Nov 1 00:23:07.235098 kubelet[2543]: E1101 00:23:07.234600 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:07.235098 kubelet[2543]: E1101 00:23:07.234704 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:07.237331 containerd[1475]: time="2025-11-01T00:23:07.237076205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:23:07.271914 kubelet[2543]: E1101 00:23:07.271880 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:07.272526 containerd[1475]: time="2025-11-01T00:23:07.272433481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7ldz2,Uid:f9213f81-8444-4554-87c5-dd1c87f94618,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:07.301965 kubelet[2543]: E1101 00:23:07.301876 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.301965 kubelet[2543]: W1101 00:23:07.301900 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.301965 kubelet[2543]: E1101 00:23:07.301919 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:07.302814 kubelet[2543]: E1101 00:23:07.302125 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:07.302814 kubelet[2543]: W1101 00:23:07.302134 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:07.302814 kubelet[2543]: E1101 00:23:07.302144 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:07.308282 kubelet[2543]: I1101 00:23:07.308249 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42a33fba-271a-4a52-bba9-06d9d0613c0c-kubelet-dir\") pod \"csi-node-driver-nw6x5\" (UID: \"42a33fba-271a-4a52-bba9-06d9d0613c0c\") " pod="calico-system/csi-node-driver-nw6x5" Nov 1 00:23:07.308705 kubelet[2543]: I1101 00:23:07.308622 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42a33fba-271a-4a52-bba9-06d9d0613c0c-registration-dir\") pod \"csi-node-driver-nw6x5\" (UID: \"42a33fba-271a-4a52-bba9-06d9d0613c0c\") " pod="calico-system/csi-node-driver-nw6x5" Nov 1 00:23:07.309153 kubelet[2543]: I1101 00:23:07.308992 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42a33fba-271a-4a52-bba9-06d9d0613c0c-socket-dir\") pod \"csi-node-driver-nw6x5\" (UID: \"42a33fba-271a-4a52-bba9-06d9d0613c0c\") " pod="calico-system/csi-node-driver-nw6x5"
Nov 1 00:23:07.309434 kubelet[2543]: I1101 00:23:07.309288 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/42a33fba-271a-4a52-bba9-06d9d0613c0c-varrun\") pod \"csi-node-driver-nw6x5\" (UID: \"42a33fba-271a-4a52-bba9-06d9d0613c0c\") " pod="calico-system/csi-node-driver-nw6x5" Nov 1 00:23:07.310145 kubelet[2543]: I1101 00:23:07.309779 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8zcm\" (UniqueName: \"kubernetes.io/projected/42a33fba-271a-4a52-bba9-06d9d0613c0c-kube-api-access-c8zcm\") pod \"csi-node-driver-nw6x5\" (UID: \"42a33fba-271a-4a52-bba9-06d9d0613c0c\") " pod="calico-system/csi-node-driver-nw6x5"
Nov 1 00:23:07.316920 containerd[1475]: time="2025-11-01T00:23:07.316800074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:07.318635 containerd[1475]: time="2025-11-01T00:23:07.317789329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:07.318635 containerd[1475]: time="2025-11-01T00:23:07.317829870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:07.318635 containerd[1475]: time="2025-11-01T00:23:07.318270932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:07.344625 systemd[1]: Started cri-containerd-306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798.scope - libcontainer container 306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798.
Nov 1 00:23:07.400335 containerd[1475]: time="2025-11-01T00:23:07.400076736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7ldz2,Uid:f9213f81-8444-4554-87c5-dd1c87f94618,Namespace:calico-system,Attempt:0,} returns sandbox id \"306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798\"" Nov 1 00:23:07.402669 kubelet[2543]: E1101 00:23:07.401980 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Nov 1 00:23:08.561213 kubelet[2543]: E1101 00:23:08.561154 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:08.564833 containerd[1475]: time="2025-11-01T00:23:08.564767221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:08.565608 containerd[1475]: time="2025-11-01T00:23:08.565469758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:23:08.567156 containerd[1475]: time="2025-11-01T00:23:08.566176186Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:08.569082 containerd[1475]: time="2025-11-01T00:23:08.568238066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:08.569082 containerd[1475]: time="2025-11-01T00:23:08.568979715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.331870168s" Nov 1 00:23:08.569082 containerd[1475]: time="2025-11-01T00:23:08.569002725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:23:08.570461 containerd[1475]: time="2025-11-01T00:23:08.570435090Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:23:08.586771 containerd[1475]: time="2025-11-01T00:23:08.586737871Z" level=info msg="CreateContainer within sandbox \"52a8fd0930cd4b0953a269104b9bbad94ac1fd5cdbab6fee0961727dc06d8465\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:23:08.599061 containerd[1475]: time="2025-11-01T00:23:08.599036783Z" level=info msg="CreateContainer within sandbox \"52a8fd0930cd4b0953a269104b9bbad94ac1fd5cdbab6fee0961727dc06d8465\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"af8f2bbae22228ceaa994d2f1a53878fa9bd9fea190c172421da03353f27788d\"" Nov 1 00:23:08.599594 containerd[1475]: time="2025-11-01T00:23:08.599552426Z" level=info msg="StartContainer for \"af8f2bbae22228ceaa994d2f1a53878fa9bd9fea190c172421da03353f27788d\"" Nov 1 00:23:08.641627 systemd[1]: Started cri-containerd-af8f2bbae22228ceaa994d2f1a53878fa9bd9fea190c172421da03353f27788d.scope - libcontainer container af8f2bbae22228ceaa994d2f1a53878fa9bd9fea190c172421da03353f27788d. Nov 1 00:23:08.688303 containerd[1475]: time="2025-11-01T00:23:08.688174163Z" level=info msg="StartContainer for \"af8f2bbae22228ceaa994d2f1a53878fa9bd9fea190c172421da03353f27788d\" returns successfully" Nov 1 00:23:09.197667 containerd[1475]: time="2025-11-01T00:23:09.197609726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:09.198395 containerd[1475]: time="2025-11-01T00:23:09.198350513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:23:09.198832 containerd[1475]: time="2025-11-01T00:23:09.198775173Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:09.206312 containerd[1475]: time="2025-11-01T00:23:09.206280486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:09.207096 containerd[1475]: time="2025-11-01T00:23:09.207066914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 636.598503ms" Nov 1 00:23:09.207134 containerd[1475]: time="2025-11-01T00:23:09.207098255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:23:09.210731 containerd[1475]: time="2025-11-01T00:23:09.210704198Z" level=info msg="CreateContainer within sandbox \"306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:23:09.223365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763551924.mount: Deactivated successfully. 
Nov 1 00:23:09.223987 containerd[1475]: time="2025-11-01T00:23:09.223918042Z" level=info msg="CreateContainer within sandbox \"306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978\"" Nov 1 00:23:09.225203 containerd[1475]: time="2025-11-01T00:23:09.225183101Z" level=info msg="StartContainer for \"c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978\"" Nov 1 00:23:09.265614 systemd[1]: Started cri-containerd-c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978.scope - libcontainer container c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978. Nov 1 00:23:09.299743 containerd[1475]: time="2025-11-01T00:23:09.299698697Z" level=info msg="StartContainer for \"c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978\" returns successfully" Nov 1 00:23:09.318814 systemd[1]: cri-containerd-c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978.scope: Deactivated successfully. Nov 1 00:23:09.406307 containerd[1475]: time="2025-11-01T00:23:09.406233421Z" level=info msg="shim disconnected" id=c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978 namespace=k8s.io Nov 1 00:23:09.406307 containerd[1475]: time="2025-11-01T00:23:09.406286842Z" level=warning msg="cleaning up after shim disconnected" id=c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978 namespace=k8s.io Nov 1 00:23:09.406307 containerd[1475]: time="2025-11-01T00:23:09.406300272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:09.658882 kubelet[2543]: E1101 00:23:09.658264 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:09.663047 containerd[1475]: time="2025-11-01T00:23:09.662002291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:23:09.667669 kubelet[2543]: E1101 00:23:09.666702 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:09.908041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0d7c4b1b46c6652b6ecb9e7565f1effa417c8a3578ce09230ec460d81b9e978-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:10.561841 kubelet[2543]: E1101 00:23:10.560989 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:10.666914 kubelet[2543]: I1101 00:23:10.666858 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:10.668181 kubelet[2543]: E1101 00:23:10.667158 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:11.824547 containerd[1475]: time="2025-11-01T00:23:11.824372956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:11.825915 containerd[1475]: time="2025-11-01T00:23:11.825879347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:23:11.828553 containerd[1475]: time="2025-11-01T00:23:11.827128642Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:11.830137 containerd[1475]: time="2025-11-01T00:23:11.830089892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:11.831474 containerd[1475]: time="2025-11-01T00:23:11.831438949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.169209342s" Nov 1 00:23:11.831588 containerd[1475]: time="2025-11-01T00:23:11.831569362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:23:11.838759 containerd[1475]: time="2025-11-01T00:23:11.838692696Z" level=info msg="CreateContainer within sandbox \"306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:23:11.856683 containerd[1475]: time="2025-11-01T00:23:11.856655510Z" level=info msg="CreateContainer within sandbox \"306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10\"" Nov 1 00:23:11.860021 containerd[1475]: time="2025-11-01T00:23:11.858769293Z" level=info msg="StartContainer for \"c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10\"" Nov 1 00:23:11.904792 systemd[1]: Started cri-containerd-c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10.scope - libcontainer container c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10. 
Nov 1 00:23:11.954789 containerd[1475]: time="2025-11-01T00:23:11.954747755Z" level=info msg="StartContainer for \"c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10\" returns successfully" Nov 1 00:23:12.521863 containerd[1475]: time="2025-11-01T00:23:12.521596772Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:23:12.525622 systemd[1]: cri-containerd-c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10.scope: Deactivated successfully. Nov 1 00:23:12.553955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10-rootfs.mount: Deactivated successfully. Nov 1 00:23:12.560636 kubelet[2543]: E1101 00:23:12.560573 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:12.587954 kubelet[2543]: I1101 00:23:12.587895 2543 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:23:12.595759 containerd[1475]: time="2025-11-01T00:23:12.595512984Z" level=info msg="shim disconnected" id=c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10 namespace=k8s.io Nov 1 00:23:12.595759 containerd[1475]: time="2025-11-01T00:23:12.595558265Z" level=warning msg="cleaning up after shim disconnected" id=c249339430ecb37e0de8378f4bb89aa2fb7c41c18efc0817d54c4ab647427d10 namespace=k8s.io Nov 1 00:23:12.595759 containerd[1475]: time="2025-11-01T00:23:12.595567485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:12.627063 containerd[1475]: time="2025-11-01T00:23:12.626407621Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 1 00:23:12.629620 kubelet[2543]: I1101 00:23:12.628236 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7fd7f8bc88-nnxhm" podStartSLOduration=5.294689205 podStartE2EDuration="6.628219375s" podCreationTimestamp="2025-11-01 00:23:06 +0000 UTC" firstStartedPulling="2025-11-01 00:23:07.236782597 +0000 UTC m=+19.789246503" lastFinishedPulling="2025-11-01 00:23:08.570312747 +0000 UTC m=+21.122776673" observedRunningTime="2025-11-01 00:23:09.788599117 +0000 UTC m=+22.341063023" watchObservedRunningTime="2025-11-01 00:23:12.628219375 +0000 UTC m=+25.180683281" Nov 1 00:23:12.644168 systemd[1]: Created slice kubepods-besteffort-podd94db435_8568_49d2_8fbb_f0e2ac2a0138.slice - libcontainer container kubepods-besteffort-podd94db435_8568_49d2_8fbb_f0e2ac2a0138.slice. Nov 1 00:23:12.662726 systemd[1]: Created slice kubepods-besteffort-pod2f5e2ac6_875f_4179_9d8d_01e4d536c5f3.slice - libcontainer container kubepods-besteffort-pod2f5e2ac6_875f_4179_9d8d_01e4d536c5f3.slice. 
Nov 1 00:23:12.680892 kubelet[2543]: I1101 00:23:12.680824 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d94db435-8568-49d2-8fbb-f0e2ac2a0138-calico-apiserver-certs\") pod \"calico-apiserver-6dd9845dcf-xh7sj\" (UID: \"d94db435-8568-49d2-8fbb-f0e2ac2a0138\") " pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" Nov 1 00:23:12.681534 kubelet[2543]: I1101 00:23:12.681284 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/590f4e1f-e213-4b72-aab5-d1ab9906213b-goldmane-ca-bundle\") pod \"goldmane-666569f655-lbk9p\" (UID: \"590f4e1f-e213-4b72-aab5-d1ab9906213b\") " pod="calico-system/goldmane-666569f655-lbk9p" Nov 1 00:23:12.681534 kubelet[2543]: I1101 00:23:12.681359 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f5e2ac6-875f-4179-9d8d-01e4d536c5f3-calico-apiserver-certs\") pod \"calico-apiserver-6dd9845dcf-vd6vw\" (UID: \"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3\") " pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" Nov 1 00:23:12.681534 kubelet[2543]: I1101 00:23:12.681517 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/590f4e1f-e213-4b72-aab5-d1ab9906213b-goldmane-key-pair\") pod \"goldmane-666569f655-lbk9p\" (UID: \"590f4e1f-e213-4b72-aab5-d1ab9906213b\") " pod="calico-system/goldmane-666569f655-lbk9p" Nov 1 00:23:12.681646 kubelet[2543]: I1101 00:23:12.681571 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntd2\" (UniqueName: \"kubernetes.io/projected/590f4e1f-e213-4b72-aab5-d1ab9906213b-kube-api-access-sntd2\") pod \"goldmane-666569f655-lbk9p\" (UID: \"590f4e1f-e213-4b72-aab5-d1ab9906213b\") " pod="calico-system/goldmane-666569f655-lbk9p" Nov 1 00:23:12.681646 kubelet[2543]: I1101 00:23:12.681629 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e77087b-330c-4d1c-8e6e-77f7214641fd-tigera-ca-bundle\") pod \"calico-kube-controllers-f84c65659-5v5f2\" (UID: \"2e77087b-330c-4d1c-8e6e-77f7214641fd\") " pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" Nov 1 00:23:12.681689 kubelet[2543]: I1101 00:23:12.681651 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcsf\" (UniqueName: \"kubernetes.io/projected/2e77087b-330c-4d1c-8e6e-77f7214641fd-kube-api-access-llcsf\") pod \"calico-kube-controllers-f84c65659-5v5f2\" (UID: \"2e77087b-330c-4d1c-8e6e-77f7214641fd\") " pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" Nov 1 00:23:12.681720 kubelet[2543]: I1101 00:23:12.681671 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485nk\" (UniqueName: \"kubernetes.io/projected/d94db435-8568-49d2-8fbb-f0e2ac2a0138-kube-api-access-485nk\") pod \"calico-apiserver-6dd9845dcf-xh7sj\" (UID: \"d94db435-8568-49d2-8fbb-f0e2ac2a0138\") " pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" Nov 1 00:23:12.681720 kubelet[2543]: I1101 00:23:12.681714 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-z2qkf\" (UniqueName: \"kubernetes.io/projected/2f5e2ac6-875f-4179-9d8d-01e4d536c5f3-kube-api-access-z2qkf\") pod \"calico-apiserver-6dd9845dcf-vd6vw\" (UID: \"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3\") " pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" Nov 1 00:23:12.681764 kubelet[2543]: I1101 00:23:12.681732 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/590f4e1f-e213-4b72-aab5-d1ab9906213b-config\") pod \"goldmane-666569f655-lbk9p\" (UID: \"590f4e1f-e213-4b72-aab5-d1ab9906213b\") " pod="calico-system/goldmane-666569f655-lbk9p" Nov 1 00:23:12.693777 kubelet[2543]: E1101 00:23:12.693115 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:12.694712 systemd[1]: Created slice kubepods-besteffort-pod2e77087b_330c_4d1c_8e6e_77f7214641fd.slice - libcontainer container kubepods-besteffort-pod2e77087b_330c_4d1c_8e6e_77f7214641fd.slice. Nov 1 00:23:12.699571 containerd[1475]: time="2025-11-01T00:23:12.699542649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:23:12.706925 systemd[1]: Created slice kubepods-besteffort-pod590f4e1f_e213_4b72_aab5_d1ab9906213b.slice - libcontainer container kubepods-besteffort-pod590f4e1f_e213_4b72_aab5_d1ab9906213b.slice. Nov 1 00:23:12.715553 systemd[1]: Created slice kubepods-besteffort-poda9d1b80c_5bb4_4c80_820b_54250043bfd7.slice - libcontainer container kubepods-besteffort-poda9d1b80c_5bb4_4c80_820b_54250043bfd7.slice. Nov 1 00:23:12.727814 systemd[1]: Created slice kubepods-besteffort-pod629a8271_4389_4e02_9056_efb21f586504.slice - libcontainer container kubepods-besteffort-pod629a8271_4389_4e02_9056_efb21f586504.slice. Nov 1 00:23:12.736349 systemd[1]: Created slice kubepods-burstable-podc83e6a8a_f958_47de_a7b8_4adca302cf7a.slice - libcontainer container kubepods-burstable-podc83e6a8a_f958_47de_a7b8_4adca302cf7a.slice. Nov 1 00:23:12.744766 systemd[1]: Created slice kubepods-burstable-pod780eceec_d826_43fc_b38c_894af01c17df.slice - libcontainer container kubepods-burstable-pod780eceec_d826_43fc_b38c_894af01c17df.slice. 
Nov 1 00:23:12.783055 kubelet[2543]: I1101 00:23:12.782954 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4c7\" (UniqueName: \"kubernetes.io/projected/629a8271-4389-4e02-9056-efb21f586504-kube-api-access-px4c7\") pod \"calico-apiserver-57cb94b8fc-kxpw2\" (UID: \"629a8271-4389-4e02-9056-efb21f586504\") " pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" Nov 1 00:23:12.783055 kubelet[2543]: I1101 00:23:12.782994 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-ca-bundle\") pod \"whisker-787b58b497-v8tsm\" (UID: \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\") " pod="calico-system/whisker-787b58b497-v8tsm" Nov 1 00:23:12.783055 kubelet[2543]: I1101 00:23:12.783011 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssptn\" (UniqueName: \"kubernetes.io/projected/a9d1b80c-5bb4-4c80-820b-54250043bfd7-kube-api-access-ssptn\") pod \"whisker-787b58b497-v8tsm\" (UID: \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\") " pod="calico-system/whisker-787b58b497-v8tsm" Nov 1 00:23:12.783055 kubelet[2543]: I1101 00:23:12.783042 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c83e6a8a-f958-47de-a7b8-4adca302cf7a-config-volume\") pod \"coredns-668d6bf9bc-2lq56\" (UID: \"c83e6a8a-f958-47de-a7b8-4adca302cf7a\") " pod="kube-system/coredns-668d6bf9bc-2lq56" Nov 1 00:23:12.783055 kubelet[2543]: I1101 00:23:12.783058 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzzq\" (UniqueName: \"kubernetes.io/projected/780eceec-d826-43fc-b38c-894af01c17df-kube-api-access-9jzzq\") pod \"coredns-668d6bf9bc-vq8rb\" (UID: \"780eceec-d826-43fc-b38c-894af01c17df\") " pod="kube-system/coredns-668d6bf9bc-vq8rb" Nov 1 00:23:12.783238 kubelet[2543]: I1101 00:23:12.783094 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq22c\" (UniqueName: \"kubernetes.io/projected/c83e6a8a-f958-47de-a7b8-4adca302cf7a-kube-api-access-sq22c\") pod \"coredns-668d6bf9bc-2lq56\" (UID: \"c83e6a8a-f958-47de-a7b8-4adca302cf7a\") " pod="kube-system/coredns-668d6bf9bc-2lq56" Nov 1 00:23:12.783238 kubelet[2543]: I1101 00:23:12.783173 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-backend-key-pair\") pod \"whisker-787b58b497-v8tsm\" (UID: \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\") " pod="calico-system/whisker-787b58b497-v8tsm" Nov 1 00:23:12.783238 kubelet[2543]: I1101 00:23:12.783197 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/629a8271-4389-4e02-9056-efb21f586504-calico-apiserver-certs\") pod \"calico-apiserver-57cb94b8fc-kxpw2\" (UID: \"629a8271-4389-4e02-9056-efb21f586504\") " pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" Nov 1 00:23:12.783238 kubelet[2543]: I1101 00:23:12.783210 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/780eceec-d826-43fc-b38c-894af01c17df-config-volume\") pod \"coredns-668d6bf9bc-vq8rb\" (UID: \"780eceec-d826-43fc-b38c-894af01c17df\") " pod="kube-system/coredns-668d6bf9bc-vq8rb" Nov 1 00:23:12.955694 containerd[1475]: time="2025-11-01T00:23:12.955314992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-xh7sj,Uid:d94db435-8568-49d2-8fbb-f0e2ac2a0138,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:12.981741 containerd[1475]: time="2025-11-01T00:23:12.981686423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-vd6vw,Uid:2f5e2ac6-875f-4179-9d8d-01e4d536c5f3,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:13.000625 containerd[1475]: time="2025-11-01T00:23:13.000599562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84c65659-5v5f2,Uid:2e77087b-330c-4d1c-8e6e-77f7214641fd,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:13.014331 containerd[1475]: time="2025-11-01T00:23:13.014066302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lbk9p,Uid:590f4e1f-e213-4b72-aab5-d1ab9906213b,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:13.020614 containerd[1475]: time="2025-11-01T00:23:13.020560278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787b58b497-v8tsm,Uid:a9d1b80c-5bb4-4c80-820b-54250043bfd7,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:13.032447 containerd[1475]: time="2025-11-01T00:23:13.032411729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57cb94b8fc-kxpw2,Uid:629a8271-4389-4e02-9056-efb21f586504,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:13.041628 kubelet[2543]: E1101 00:23:13.040673 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:13.042859 containerd[1475]: time="2025-11-01T00:23:13.042825734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lq56,Uid:c83e6a8a-f958-47de-a7b8-4adca302cf7a,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:13.052814 kubelet[2543]: E1101 00:23:13.051856 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:13.053232 containerd[1475]: time="2025-11-01T00:23:13.053101237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq8rb,Uid:780eceec-d826-43fc-b38c-894af01c17df,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:13.120931 containerd[1475]: time="2025-11-01T00:23:13.120868353Z" level=error msg="Failed to destroy network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.121576 containerd[1475]: time="2025-11-01T00:23:13.121548775Z" level=error msg="encountered an error cleaning up failed sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.121926 containerd[1475]: 
time="2025-11-01T00:23:13.121836320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-xh7sj,Uid:d94db435-8568-49d2-8fbb-f0e2ac2a0138,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.123370 kubelet[2543]: E1101 00:23:13.123325 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.123614 kubelet[2543]: E1101 00:23:13.123471 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" Nov 1 00:23:13.123772 kubelet[2543]: E1101 00:23:13.123697 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" Nov 1 00:23:13.123872 kubelet[2543]: E1101 00:23:13.123846 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dd9845dcf-xh7sj_calico-apiserver(d94db435-8568-49d2-8fbb-f0e2ac2a0138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dd9845dcf-xh7sj_calico-apiserver(d94db435-8568-49d2-8fbb-f0e2ac2a0138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:23:13.217922 containerd[1475]: time="2025-11-01T00:23:13.217557713Z" level=error msg="Failed to destroy network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.218057 containerd[1475]: time="2025-11-01T00:23:13.217975120Z" level=error msg="encountered an error cleaning up failed sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.218057 containerd[1475]: time="2025-11-01T00:23:13.218024251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-vd6vw,Uid:2f5e2ac6-875f-4179-9d8d-01e4d536c5f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.218530 kubelet[2543]: E1101 00:23:13.218293 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.218592 kubelet[2543]: E1101 00:23:13.218395 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" Nov 1 00:23:13.218592 kubelet[2543]: E1101 00:23:13.218555 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" Nov 1 00:23:13.218808 kubelet[2543]: E1101 00:23:13.218637 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dd9845dcf-vd6vw_calico-apiserver(2f5e2ac6-875f-4179-9d8d-01e4d536c5f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dd9845dcf-vd6vw_calico-apiserver(2f5e2ac6-875f-4179-9d8d-01e4d536c5f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:23:13.227552 containerd[1475]: time="2025-11-01T00:23:13.227524130Z" level=error msg="Failed to destroy network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.231823 containerd[1475]: time="2025-11-01T00:23:13.231795896Z" level=error msg="encountered an error cleaning up failed sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.232162 containerd[1475]: time="2025-11-01T00:23:13.231916768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lbk9p,Uid:590f4e1f-e213-4b72-aab5-d1ab9906213b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.232543 kubelet[2543]: E1101 00:23:13.232321 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.232543 kubelet[2543]: E1101 00:23:13.232387 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lbk9p" Nov 1 00:23:13.232543 kubelet[2543]: E1101 00:23:13.232411 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lbk9p" Nov 1 00:23:13.233558 kubelet[2543]: E1101 00:23:13.232461 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lbk9p_calico-system(590f4e1f-e213-4b72-aab5-d1ab9906213b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lbk9p_calico-system(590f4e1f-e213-4b72-aab5-d1ab9906213b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:23:13.237647 containerd[1475]: time="2025-11-01T00:23:13.237597459Z" level=error msg="Failed to destroy network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.238410 containerd[1475]: time="2025-11-01T00:23:13.238386873Z" level=error msg="encountered an error cleaning up failed sandbox 
\"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.238572 containerd[1475]: time="2025-11-01T00:23:13.238525166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57cb94b8fc-kxpw2,Uid:629a8271-4389-4e02-9056-efb21f586504,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.239359 kubelet[2543]: E1101 00:23:13.239204 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.239359 kubelet[2543]: E1101 00:23:13.239255 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" Nov 1 00:23:13.239359 kubelet[2543]: E1101 00:23:13.239278 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" Nov 1 00:23:13.239456 kubelet[2543]: E1101 00:23:13.239325 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57cb94b8fc-kxpw2_calico-apiserver(629a8271-4389-4e02-9056-efb21f586504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57cb94b8fc-kxpw2_calico-apiserver(629a8271-4389-4e02-9056-efb21f586504)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:23:13.285539 containerd[1475]: time="2025-11-01T00:23:13.285478681Z" level=error msg="Failed to destroy network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.286653 
containerd[1475]: time="2025-11-01T00:23:13.286629621Z" level=error msg="encountered an error cleaning up failed sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.286816 containerd[1475]: time="2025-11-01T00:23:13.286775874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lq56,Uid:c83e6a8a-f958-47de-a7b8-4adca302cf7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.287398 kubelet[2543]: E1101 00:23:13.287342 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.287712 kubelet[2543]: E1101 00:23:13.287400 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lq56" Nov 1 00:23:13.287793 kubelet[2543]: E1101 00:23:13.287713 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2lq56" Nov 1 00:23:13.288592 kubelet[2543]: E1101 00:23:13.288556 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2lq56_kube-system(c83e6a8a-f958-47de-a7b8-4adca302cf7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2lq56_kube-system(c83e6a8a-f958-47de-a7b8-4adca302cf7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2lq56" podUID="c83e6a8a-f958-47de-a7b8-4adca302cf7a" Nov 1 00:23:13.296562 containerd[1475]: time="2025-11-01T00:23:13.296343214Z" level=error msg="Failed to destroy network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:23:13.297165 containerd[1475]: time="2025-11-01T00:23:13.297122408Z" level=error msg="encountered an error cleaning up failed sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.297210 containerd[1475]: time="2025-11-01T00:23:13.297183399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84c65659-5v5f2,Uid:2e77087b-330c-4d1c-8e6e-77f7214641fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.298517 kubelet[2543]: E1101 00:23:13.298308 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.298517 kubelet[2543]: E1101 00:23:13.298345 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" Nov 1 00:23:13.298517 kubelet[2543]: E1101 00:23:13.298361 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" Nov 1 00:23:13.299936 kubelet[2543]: E1101 00:23:13.299692 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f84c65659-5v5f2_calico-system(2e77087b-330c-4d1c-8e6e-77f7214641fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f84c65659-5v5f2_calico-system(2e77087b-330c-4d1c-8e6e-77f7214641fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:23:13.310726 containerd[1475]: time="2025-11-01T00:23:13.310621638Z" level=error msg="Failed to destroy network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.311216 containerd[1475]: time="2025-11-01T00:23:13.311192848Z" level=error msg="encountered an error cleaning up failed sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.311365 containerd[1475]: time="2025-11-01T00:23:13.311292520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq8rb,Uid:780eceec-d826-43fc-b38c-894af01c17df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.311538 kubelet[2543]: E1101 00:23:13.311507 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.311590 kubelet[2543]: E1101 00:23:13.311550 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vq8rb" Nov 1 00:23:13.311623 kubelet[2543]: E1101 00:23:13.311571 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vq8rb" Nov 1 00:23:13.311652 kubelet[2543]: E1101 00:23:13.311624 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vq8rb_kube-system(780eceec-d826-43fc-b38c-894af01c17df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vq8rb_kube-system(780eceec-d826-43fc-b38c-894af01c17df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vq8rb" podUID="780eceec-d826-43fc-b38c-894af01c17df" Nov 1 00:23:13.317600 containerd[1475]: time="2025-11-01T00:23:13.317571212Z" level=error msg="Failed to destroy network for sandbox 
\"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.317925 containerd[1475]: time="2025-11-01T00:23:13.317890148Z" level=error msg="encountered an error cleaning up failed sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.317971 containerd[1475]: time="2025-11-01T00:23:13.317935368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787b58b497-v8tsm,Uid:a9d1b80c-5bb4-4c80-820b-54250043bfd7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.318133 kubelet[2543]: E1101 00:23:13.318078 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.318133 kubelet[2543]: E1101 00:23:13.318114 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-787b58b497-v8tsm" Nov 1 00:23:13.318133 kubelet[2543]: E1101 00:23:13.318131 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-787b58b497-v8tsm" Nov 1 00:23:13.318264 kubelet[2543]: E1101 00:23:13.318156 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-787b58b497-v8tsm_calico-system(a9d1b80c-5bb4-4c80-820b-54250043bfd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-787b58b497-v8tsm_calico-system(a9d1b80c-5bb4-4c80-820b-54250043bfd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-787b58b497-v8tsm" podUID="a9d1b80c-5bb4-4c80-820b-54250043bfd7" Nov 1 00:23:13.695649 kubelet[2543]: I1101 00:23:13.695529 2543 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:13.698529 containerd[1475]: time="2025-11-01T00:23:13.697826137Z" level=info msg="StopPodSandbox for \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\"" Nov 1 00:23:13.698529 containerd[1475]: time="2025-11-01T00:23:13.698084042Z" level=info msg="Ensure that sandbox 4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94 in task-service has been cleanup successfully" Nov 1 00:23:13.716411 kubelet[2543]: I1101 00:23:13.716365 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:13.720123 containerd[1475]: time="2025-11-01T00:23:13.720085803Z" level=info msg="StopPodSandbox for \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\"" Nov 1 00:23:13.720602 containerd[1475]: time="2025-11-01T00:23:13.720378348Z" level=info msg="Ensure that sandbox 8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd in task-service has been cleanup successfully" Nov 1 00:23:13.725300 kubelet[2543]: I1101 00:23:13.725264 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:13.727616 containerd[1475]: time="2025-11-01T00:23:13.727562086Z" level=info msg="StopPodSandbox for \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\"" Nov 1 00:23:13.727785 containerd[1475]: time="2025-11-01T00:23:13.727747949Z" level=info msg="Ensure that sandbox 12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5 in task-service has been cleanup successfully" Nov 1 00:23:13.730628 kubelet[2543]: I1101 00:23:13.730507 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:13.733564 containerd[1475]: time="2025-11-01T00:23:13.733534322Z" level=info msg="StopPodSandbox for \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\"" Nov 1 00:23:13.734945 containerd[1475]: time="2025-11-01T00:23:13.734792965Z" level=info msg="Ensure that sandbox 3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274 in task-service has been cleanup successfully" Nov 1 00:23:13.736731 kubelet[2543]: I1101 00:23:13.736704 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:13.738733 containerd[1475]: time="2025-11-01T00:23:13.738703894Z" level=info msg="StopPodSandbox for \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\"" Nov 1 00:23:13.738872 containerd[1475]: time="2025-11-01T00:23:13.738842487Z" level=info msg="Ensure that sandbox 51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714 in task-service has been cleanup successfully" Nov 1 00:23:13.743818 kubelet[2543]: I1101 00:23:13.743742 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:13.746063 containerd[1475]: time="2025-11-01T00:23:13.745805231Z" level=info msg="StopPodSandbox for \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\"" Nov 1 00:23:13.749085 containerd[1475]: time="2025-11-01T00:23:13.749039578Z" level=info msg="Ensure that 
sandbox 37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b in task-service has been cleanup successfully" Nov 1 00:23:13.753300 kubelet[2543]: I1101 00:23:13.753273 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:13.755354 containerd[1475]: time="2025-11-01T00:23:13.754617447Z" level=info msg="StopPodSandbox for \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\"" Nov 1 00:23:13.755354 containerd[1475]: time="2025-11-01T00:23:13.754742850Z" level=info msg="Ensure that sandbox 813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4 in task-service has been cleanup successfully" Nov 1 00:23:13.772763 kubelet[2543]: I1101 00:23:13.772736 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:13.776373 containerd[1475]: time="2025-11-01T00:23:13.776330784Z" level=info msg="StopPodSandbox for \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\"" Nov 1 00:23:13.777831 containerd[1475]: time="2025-11-01T00:23:13.777689128Z" level=info msg="Ensure that sandbox ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e in task-service has been cleanup successfully" Nov 1 00:23:13.831294 containerd[1475]: time="2025-11-01T00:23:13.831247781Z" level=error msg="StopPodSandbox for \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\" failed" error="failed to destroy network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.831858 kubelet[2543]: E1101 00:23:13.831637 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:13.831858 kubelet[2543]: E1101 00:23:13.831695 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5"} Nov 1 00:23:13.831858 kubelet[2543]: E1101 00:23:13.831749 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e77087b-330c-4d1c-8e6e-77f7214641fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.831858 kubelet[2543]: E1101 00:23:13.831771 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e77087b-330c-4d1c-8e6e-77f7214641fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:23:13.842113 containerd[1475]: time="2025-11-01T00:23:13.841658376Z" level=error msg="StopPodSandbox for \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\" failed" error="failed to destroy network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.842218 kubelet[2543]: E1101 00:23:13.841805 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:13.842218 kubelet[2543]: E1101 00:23:13.841835 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4"} Nov 1 00:23:13.842218 kubelet[2543]: E1101 00:23:13.841862 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.842218 kubelet[2543]: E1101 00:23:13.841887 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:23:13.880600 containerd[1475]: time="2025-11-01T00:23:13.880182001Z" level=error msg="StopPodSandbox for \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\" failed" error="failed to destroy network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.880805 kubelet[2543]: E1101 00:23:13.880401 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:13.880805 kubelet[2543]: E1101 00:23:13.880443 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94"} Nov 1 00:23:13.880805 kubelet[2543]: E1101 00:23:13.880499 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c83e6a8a-f958-47de-a7b8-4adca302cf7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.880805 kubelet[2543]: E1101 00:23:13.880530 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c83e6a8a-f958-47de-a7b8-4adca302cf7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2lq56" podUID="c83e6a8a-f958-47de-a7b8-4adca302cf7a" Nov 1 00:23:13.885736 containerd[1475]: time="2025-11-01T00:23:13.885343493Z" level=error msg="StopPodSandbox for \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\" failed" error="failed to destroy network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.885828 kubelet[2543]: E1101 00:23:13.885548 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:13.885828 kubelet[2543]: E1101 00:23:13.885613 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd"} Nov 1 00:23:13.885828 kubelet[2543]: E1101 00:23:13.885650 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.885828 kubelet[2543]: E1101 00:23:13.885701 2543 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-787b58b497-v8tsm" podUID="a9d1b80c-5bb4-4c80-820b-54250043bfd7" Nov 1 00:23:13.897861 containerd[1475]: time="2025-11-01T00:23:13.897829145Z" level=error msg="StopPodSandbox for \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\" failed" error="failed to destroy network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.898461 kubelet[2543]: E1101 00:23:13.898327 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:13.898461 kubelet[2543]: E1101 00:23:13.898362 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e"} Nov 1 00:23:13.898461 kubelet[2543]: E1101 00:23:13.898388 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d94db435-8568-49d2-8fbb-f0e2ac2a0138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.898461 kubelet[2543]: E1101 00:23:13.898415 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d94db435-8568-49d2-8fbb-f0e2ac2a0138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:23:13.898826 containerd[1475]: time="2025-11-01T00:23:13.898793222Z" level=error msg="StopPodSandbox for \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\" failed" error="failed to destroy network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.899196 kubelet[2543]: E1101 00:23:13.899024 2543 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:13.899196 kubelet[2543]: E1101 00:23:13.899098 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714"} Nov 1 00:23:13.899196 kubelet[2543]: E1101 00:23:13.899121 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"590f4e1f-e213-4b72-aab5-d1ab9906213b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.899196 kubelet[2543]: E1101 00:23:13.899169 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"590f4e1f-e213-4b72-aab5-d1ab9906213b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:23:13.906501 containerd[1475]: time="2025-11-01T00:23:13.900055215Z" level=error msg="StopPodSandbox for \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\" failed" error="failed to destroy network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.906501 containerd[1475]: time="2025-11-01T00:23:13.903237211Z" level=error msg="StopPodSandbox for \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\" failed" error="failed to destroy network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:13.906734 kubelet[2543]: E1101 00:23:13.905649 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:13.906734 kubelet[2543]: E1101 00:23:13.905694 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274"} Nov 1 00:23:13.906734 kubelet[2543]: E1101 00:23:13.905723 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"780eceec-d826-43fc-b38c-894af01c17df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.906734 kubelet[2543]: E1101 00:23:13.905743 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"780eceec-d826-43fc-b38c-894af01c17df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vq8rb" podUID="780eceec-d826-43fc-b38c-894af01c17df" Nov 1 00:23:13.906960 kubelet[2543]: E1101 00:23:13.905773 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:13.906960 kubelet[2543]: E1101 00:23:13.905787 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b"} Nov 1 00:23:13.906960 kubelet[2543]: E1101 00:23:13.905805 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"629a8271-4389-4e02-9056-efb21f586504\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:13.906960 kubelet[2543]: E1101 00:23:13.905826 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"629a8271-4389-4e02-9056-efb21f586504\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:23:14.571669 systemd[1]: Created slice kubepods-besteffort-pod42a33fba_271a_4a52_bba9_06d9d0613c0c.slice - libcontainer container kubepods-besteffort-pod42a33fba_271a_4a52_bba9_06d9d0613c0c.slice. 
Nov 1 00:23:14.575513 containerd[1475]: time="2025-11-01T00:23:14.575444071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nw6x5,Uid:42a33fba-271a-4a52-bba9-06d9d0613c0c,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:14.693015 containerd[1475]: time="2025-11-01T00:23:14.688478237Z" level=error msg="Failed to destroy network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:14.693015 containerd[1475]: time="2025-11-01T00:23:14.689186388Z" level=error msg="encountered an error cleaning up failed sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:14.693015 containerd[1475]: time="2025-11-01T00:23:14.689235349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nw6x5,Uid:42a33fba-271a-4a52-bba9-06d9d0613c0c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:14.693429 kubelet[2543]: E1101 00:23:14.692578 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:14.693429 kubelet[2543]: E1101 00:23:14.692623 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nw6x5" Nov 1 00:23:14.693429 kubelet[2543]: E1101 00:23:14.692645 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nw6x5" Nov 1 00:23:14.693579 kubelet[2543]: E1101 00:23:14.692678 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:14.696055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79-shm.mount: Deactivated successfully. Nov 1 00:23:14.775805 kubelet[2543]: I1101 00:23:14.775783 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:14.777243 containerd[1475]: time="2025-11-01T00:23:14.776877341Z" level=info msg="StopPodSandbox for \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\"" Nov 1 00:23:14.777999 containerd[1475]: time="2025-11-01T00:23:14.777764076Z" level=info msg="Ensure that sandbox bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79 in task-service has been cleanup successfully" Nov 1 00:23:14.831150 containerd[1475]: time="2025-11-01T00:23:14.830852111Z" level=error msg="StopPodSandbox for \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\" failed" error="failed to destroy network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:14.831629 kubelet[2543]: E1101 00:23:14.831401 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:14.831629 kubelet[2543]: E1101 00:23:14.831451 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79"} Nov 1 00:23:14.831629 kubelet[2543]: E1101 00:23:14.831510 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42a33fba-271a-4a52-bba9-06d9d0613c0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:14.831629 kubelet[2543]: E1101 00:23:14.831539 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42a33fba-271a-4a52-bba9-06d9d0613c0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:16.782731 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1768141371.mount: Deactivated successfully. Nov 1 00:23:16.812564 containerd[1475]: time="2025-11-01T00:23:16.812112156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:16.813214 containerd[1475]: time="2025-11-01T00:23:16.813096320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:23:16.814919 containerd[1475]: time="2025-11-01T00:23:16.813954003Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:16.815518 containerd[1475]: time="2025-11-01T00:23:16.815439024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:16.817408 containerd[1475]: time="2025-11-01T00:23:16.817373733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.117701342s" Nov 1 00:23:16.817448 containerd[1475]: time="2025-11-01T00:23:16.817409733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:23:16.839159 containerd[1475]: time="2025-11-01T00:23:16.839119071Z" level=info msg="CreateContainer within sandbox \"306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:23:16.861798 containerd[1475]: time="2025-11-01T00:23:16.861763913Z" level=info msg="CreateContainer within sandbox \"306d1c696c2adb8dc3628ce18226706ba315547dd484459b35c1501af82af798\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dba1c4af100488c1f388761fe3660db06b758a9cb7c763727ea03bf75a0a7135\"" Nov 1 00:23:16.862895 containerd[1475]: time="2025-11-01T00:23:16.862869670Z" level=info msg="StartContainer for \"dba1c4af100488c1f388761fe3660db06b758a9cb7c763727ea03bf75a0a7135\"" Nov 1 00:23:16.903839 systemd[1]: Started cri-containerd-dba1c4af100488c1f388761fe3660db06b758a9cb7c763727ea03bf75a0a7135.scope - libcontainer container dba1c4af100488c1f388761fe3660db06b758a9cb7c763727ea03bf75a0a7135. Nov 1 00:23:16.941081 containerd[1475]: time="2025-11-01T00:23:16.941038645Z" level=info msg="StartContainer for \"dba1c4af100488c1f388761fe3660db06b758a9cb7c763727ea03bf75a0a7135\" returns successfully" Nov 1 00:23:17.040647 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:23:17.040765 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
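This is the pull that unblocks the node: calico/node:v3.30.4 lands, the calico-node container starts, and the kernel loads WireGuard (which Calico can use for node-to-node encryption). Taking the two figures from the "Pulled image" record at face value, the transfer rate works out to roughly 36 MiB/s; a quick check, using only numbers quoted in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both values are verbatim from the "Pulled image" log record.
        size := 156883537                   // bytes ("size \"156883537\"")
        dur := 4117701342 * time.Nanosecond // "in 4.117701342s"

        mib := float64(size) / (1 << 20)
        fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n", mib, dur, mib/dur.Seconds())
        // Output: pulled 149.6 MiB in 4.117701342s (36.3 MiB/s)
    }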
Nov 1 00:23:17.194606 containerd[1475]: time="2025-11-01T00:23:17.193037304Z" level=info msg="StopPodSandbox for \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\"" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.299 [INFO][3773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.300 [INFO][3773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" iface="eth0" netns="/var/run/netns/cni-293cc21e-4c95-cd01-14c6-2330b0f46cec" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.301 [INFO][3773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" iface="eth0" netns="/var/run/netns/cni-293cc21e-4c95-cd01-14c6-2330b0f46cec" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.301 [INFO][3773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" iface="eth0" netns="/var/run/netns/cni-293cc21e-4c95-cd01-14c6-2330b0f46cec" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.301 [INFO][3773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.301 [INFO][3773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.330 [INFO][3781] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.330 [INFO][3781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.330 [INFO][3781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.336 [WARNING][3781] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.337 [INFO][3781] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.338 [INFO][3781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:17.344704 containerd[1475]: 2025-11-01 00:23:17.342 [INFO][3773] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:17.345557 containerd[1475]: time="2025-11-01T00:23:17.344854700Z" level=info msg="TearDown network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\" successfully" Nov 1 00:23:17.345557 containerd[1475]: time="2025-11-01T00:23:17.344880051Z" level=info msg="StopPodSandbox for \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\" returns successfully" Nov 1 00:23:17.423749 kubelet[2543]: I1101 00:23:17.423704 2543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-ca-bundle\") pod \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\" (UID: \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\") " Nov 1 00:23:17.425502 kubelet[2543]: I1101 00:23:17.424700 2543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-backend-key-pair\") pod \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\" (UID: \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\") " Nov 1 00:23:17.425502 kubelet[2543]: I1101 00:23:17.424742 2543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssptn\" (UniqueName: \"kubernetes.io/projected/a9d1b80c-5bb4-4c80-820b-54250043bfd7-kube-api-access-ssptn\") pod \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\" (UID: \"a9d1b80c-5bb4-4c80-820b-54250043bfd7\") " Nov 1 00:23:17.430930 kubelet[2543]: I1101 00:23:17.430195 2543 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a9d1b80c-5bb4-4c80-820b-54250043bfd7" (UID: "a9d1b80c-5bb4-4c80-820b-54250043bfd7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:17.434241 kubelet[2543]: I1101 00:23:17.434198 2543 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d1b80c-5bb4-4c80-820b-54250043bfd7-kube-api-access-ssptn" (OuterVolumeSpecName: "kube-api-access-ssptn") pod "a9d1b80c-5bb4-4c80-820b-54250043bfd7" (UID: "a9d1b80c-5bb4-4c80-820b-54250043bfd7"). InnerVolumeSpecName "kube-api-access-ssptn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:17.436751 kubelet[2543]: I1101 00:23:17.436720 2543 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a9d1b80c-5bb4-4c80-820b-54250043bfd7" (UID: "a9d1b80c-5bb4-4c80-820b-54250043bfd7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:23:17.526254 kubelet[2543]: I1101 00:23:17.526169 2543 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssptn\" (UniqueName: \"kubernetes.io/projected/a9d1b80c-5bb4-4c80-820b-54250043bfd7-kube-api-access-ssptn\") on node \"172-237-159-149\" DevicePath \"\"" Nov 1 00:23:17.526254 kubelet[2543]: I1101 00:23:17.526223 2543 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-ca-bundle\") on node \"172-237-159-149\" DevicePath \"\"" Nov 1 00:23:17.526254 kubelet[2543]: I1101 00:23:17.526237 2543 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9d1b80c-5bb4-4c80-820b-54250043bfd7-whisker-backend-key-pair\") on node \"172-237-159-149\" DevicePath \"\"" Nov 1 00:23:17.571307 systemd[1]: Removed slice kubepods-besteffort-poda9d1b80c_5bb4_4c80_820b_54250043bfd7.slice - libcontainer container kubepods-besteffort-poda9d1b80c_5bb4_4c80_820b_54250043bfd7.slice. Nov 1 00:23:17.780068 systemd[1]: run-netns-cni\x2d293cc21e\x2d4c95\x2dcd01\x2d14c6\x2d2330b0f46cec.mount: Deactivated successfully. Nov 1 00:23:17.780374 systemd[1]: var-lib-kubelet-pods-a9d1b80c\x2d5bb4\x2d4c80\x2d820b\x2d54250043bfd7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssptn.mount: Deactivated successfully. Nov 1 00:23:17.780453 systemd[1]: var-lib-kubelet-pods-a9d1b80c\x2d5bb4\x2d4c80\x2d820b\x2d54250043bfd7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:23:17.788174 kubelet[2543]: E1101 00:23:17.786954 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:17.803013 kubelet[2543]: I1101 00:23:17.802754 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7ldz2" podStartSLOduration=2.388461754 podStartE2EDuration="11.802739723s" podCreationTimestamp="2025-11-01 00:23:06 +0000 UTC" firstStartedPulling="2025-11-01 00:23:07.403831214 +0000 UTC m=+19.956295130" lastFinishedPulling="2025-11-01 00:23:16.818109193 +0000 UTC m=+29.370573099" observedRunningTime="2025-11-01 00:23:17.802475829 +0000 UTC m=+30.354939745" watchObservedRunningTime="2025-11-01 00:23:17.802739723 +0000 UTC m=+30.355203659" Nov 1 00:23:17.862578 systemd[1]: Created slice kubepods-besteffort-podd81fb5e0_40d2_4201_bb4f_f47b80daaf86.slice - libcontainer container kubepods-besteffort-podd81fb5e0_40d2_4201_bb4f_f47b80daaf86.slice. 
Nov 1 00:23:17.929820 kubelet[2543]: I1101 00:23:17.929759 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5cpz\" (UniqueName: \"kubernetes.io/projected/d81fb5e0-40d2-4201-bb4f-f47b80daaf86-kube-api-access-d5cpz\") pod \"whisker-7546cfc797-vghbp\" (UID: \"d81fb5e0-40d2-4201-bb4f-f47b80daaf86\") " pod="calico-system/whisker-7546cfc797-vghbp" Nov 1 00:23:17.929820 kubelet[2543]: I1101 00:23:17.929818 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d81fb5e0-40d2-4201-bb4f-f47b80daaf86-whisker-ca-bundle\") pod \"whisker-7546cfc797-vghbp\" (UID: \"d81fb5e0-40d2-4201-bb4f-f47b80daaf86\") " pod="calico-system/whisker-7546cfc797-vghbp" Nov 1 00:23:17.930146 kubelet[2543]: I1101 00:23:17.929845 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d81fb5e0-40d2-4201-bb4f-f47b80daaf86-whisker-backend-key-pair\") pod \"whisker-7546cfc797-vghbp\" (UID: \"d81fb5e0-40d2-4201-bb4f-f47b80daaf86\") " pod="calico-system/whisker-7546cfc797-vghbp" Nov 1 00:23:18.169774 containerd[1475]: time="2025-11-01T00:23:18.169057673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7546cfc797-vghbp,Uid:d81fb5e0-40d2-4201-bb4f-f47b80daaf86,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:18.308765 systemd-networkd[1377]: calia08f7410687: Link UP Nov 1 00:23:18.309036 systemd-networkd[1377]: calia08f7410687: Gained carrier Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.215 [INFO][3805] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.226 [INFO][3805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0 whisker-7546cfc797- calico-system d81fb5e0-40d2-4201-bb4f-f47b80daaf86 899 0 2025-11-01 00:23:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7546cfc797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-159-149 whisker-7546cfc797-vghbp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia08f7410687 [] [] }} ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.226 [INFO][3805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.250 [INFO][3816] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" HandleID="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Workload="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.250 [INFO][3816] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" HandleID="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Workload="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-159-149", "pod":"whisker-7546cfc797-vghbp", "timestamp":"2025-11-01 00:23:18.250416571 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.250 [INFO][3816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.250 [INFO][3816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.251 [INFO][3816] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.257 [INFO][3816] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.268 [INFO][3816] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.273 [INFO][3816] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.275 [INFO][3816] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.277 [INFO][3816] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.277 [INFO][3816] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.278 [INFO][3816] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.284 [INFO][3816] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.288 [INFO][3816] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.65/26] block=192.168.65.64/26 handle="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.288 [INFO][3816] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.65/26] handle="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" host="172-237-159-149" Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.288 [INFO][3816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:18.329069 containerd[1475]: 2025-11-01 00:23:18.288 [INFO][3816] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.65/26] IPv6=[] ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" HandleID="k8s-pod-network.6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Workload="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" Nov 1 00:23:18.329939 containerd[1475]: 2025-11-01 00:23:18.291 [INFO][3805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0", GenerateName:"whisker-7546cfc797-", Namespace:"calico-system", SelfLink:"", UID:"d81fb5e0-40d2-4201-bb4f-f47b80daaf86", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7546cfc797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"whisker-7546cfc797-vghbp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia08f7410687", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.329939 containerd[1475]: 2025-11-01 00:23:18.291 [INFO][3805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.65/32] ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" Nov 1 00:23:18.329939 containerd[1475]: 2025-11-01 00:23:18.291 [INFO][3805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia08f7410687 ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" Nov 1 00:23:18.329939 containerd[1475]: 2025-11-01 00:23:18.309 [INFO][3805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" Nov 1 00:23:18.329939 containerd[1475]: 2025-11-01 00:23:18.310 [INFO][3805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" 
WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0", GenerateName:"whisker-7546cfc797-", Namespace:"calico-system", SelfLink:"", UID:"d81fb5e0-40d2-4201-bb4f-f47b80daaf86", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7546cfc797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd", Pod:"whisker-7546cfc797-vghbp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia08f7410687", MAC:"82:04:01:a3:47:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.329939 containerd[1475]: 2025-11-01 00:23:18.322 [INFO][3805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd" Namespace="calico-system" Pod="whisker-7546cfc797-vghbp" WorkloadEndpoint="172--237--159--149-k8s-whisker--7546cfc797--vghbp-eth0" Nov 1 00:23:18.350225 containerd[1475]: time="2025-11-01T00:23:18.349697470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:18.350225 containerd[1475]: time="2025-11-01T00:23:18.349767371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:18.350225 containerd[1475]: time="2025-11-01T00:23:18.349780542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.350225 containerd[1475]: time="2025-11-01T00:23:18.349870463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.372647 systemd[1]: Started cri-containerd-6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd.scope - libcontainer container 6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd. 
Nov 1 00:23:18.414686 containerd[1475]: time="2025-11-01T00:23:18.414616157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7546cfc797-vghbp,Uid:d81fb5e0-40d2-4201-bb4f-f47b80daaf86,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b624d7113579db8e8ff8a8d3399a3f3abc685484ec220b43ea78f133c1f50fd\"" Nov 1 00:23:18.417967 containerd[1475]: time="2025-11-01T00:23:18.417940240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:18.565640 containerd[1475]: time="2025-11-01T00:23:18.565579952Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:18.566734 containerd[1475]: time="2025-11-01T00:23:18.566694326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:18.566892 containerd[1475]: time="2025-11-01T00:23:18.566779857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:18.566979 kubelet[2543]: E1101 00:23:18.566929 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:18.567429 kubelet[2543]: E1101 00:23:18.566995 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:18.572733 kubelet[2543]: E1101 00:23:18.572279 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4826801892e941d495724508c51c8278,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:18.575855 containerd[1475]: time="2025-11-01T00:23:18.575362958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:18.713824 containerd[1475]: time="2025-11-01T00:23:18.713766191Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:18.715047 containerd[1475]: time="2025-11-01T00:23:18.714996467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:18.715118 containerd[1475]: time="2025-11-01T00:23:18.715088108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:18.715361 kubelet[2543]: E1101 00:23:18.715316 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:18.715445 kubelet[2543]: E1101 00:23:18.715370 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:18.715542 kubelet[2543]: E1101 00:23:18.715469 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:18.716922 kubelet[2543]: E1101 00:23:18.716809 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:23:18.793041 kubelet[2543]: I1101 00:23:18.792975 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:18.793381 kubelet[2543]: E1101 00:23:18.793330 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:18.796693 kubelet[2543]: E1101 00:23:18.796627 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:23:19.567656 kubelet[2543]: I1101 00:23:19.566366 2543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d1b80c-5bb4-4c80-820b-54250043bfd7" path="/var/lib/kubelet/pods/a9d1b80c-5bb4-4c80-820b-54250043bfd7/volumes" Nov 1 00:23:19.798372 kubelet[2543]: E1101 00:23:19.798206 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:23:20.036861 systemd-networkd[1377]: calia08f7410687: Gained IPv6LL Nov 1 00:23:24.692237 kubelet[2543]: I1101 00:23:24.691611 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:24.692874 kubelet[2543]: E1101 00:23:24.692232 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:24.805947 kubelet[2543]: E1101 00:23:24.805348 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:25.206555 kernel: bpftool[4102]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:23:25.577157 systemd-networkd[1377]: vxlan.calico: Link UP Nov 1 00:23:25.577166 systemd-networkd[1377]: vxlan.calico: Gained carrier Nov 1 00:23:26.562061 containerd[1475]: time="2025-11-01T00:23:26.561531806Z" level=info msg="StopPodSandbox for 
\"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\"" Nov 1 00:23:26.562061 containerd[1475]: time="2025-11-01T00:23:26.561783438Z" level=info msg="StopPodSandbox for \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\"" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.630 [INFO][4229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.630 [INFO][4229] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" iface="eth0" netns="/var/run/netns/cni-8d4a4e33-32b0-dabb-70c7-b094217dba89" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.632 [INFO][4229] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" iface="eth0" netns="/var/run/netns/cni-8d4a4e33-32b0-dabb-70c7-b094217dba89" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.633 [INFO][4229] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" iface="eth0" netns="/var/run/netns/cni-8d4a4e33-32b0-dabb-70c7-b094217dba89" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.633 [INFO][4229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.634 [INFO][4229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.686 [INFO][4243] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.686 [INFO][4243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.686 [INFO][4243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.692 [WARNING][4243] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.694 [INFO][4243] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.696 [INFO][4243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:26.715515 containerd[1475]: 2025-11-01 00:23:26.705 [INFO][4229] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:26.717524 containerd[1475]: time="2025-11-01T00:23:26.716742379Z" level=info msg="TearDown network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\" successfully" Nov 1 00:23:26.717524 containerd[1475]: time="2025-11-01T00:23:26.716773909Z" level=info msg="StopPodSandbox for \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\" returns successfully" Nov 1 00:23:26.719511 kubelet[2543]: E1101 00:23:26.718173 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:26.718602 systemd[1]: run-netns-cni\x2d8d4a4e33\x2d32b0\x2ddabb\x2d70c7\x2db094217dba89.mount: Deactivated successfully. Nov 1 00:23:26.722140 containerd[1475]: time="2025-11-01T00:23:26.722093290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lq56,Uid:c83e6a8a-f958-47de-a7b8-4adca302cf7a,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.657 [INFO][4230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.658 [INFO][4230] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" iface="eth0" netns="/var/run/netns/cni-2bade2c6-af02-a6f4-6c1e-4647ed748ed7" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.660 [INFO][4230] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" iface="eth0" netns="/var/run/netns/cni-2bade2c6-af02-a6f4-6c1e-4647ed748ed7" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.662 [INFO][4230] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" iface="eth0" netns="/var/run/netns/cni-2bade2c6-af02-a6f4-6c1e-4647ed748ed7" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.663 [INFO][4230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.663 [INFO][4230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.699 [INFO][4249] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.699 [INFO][4249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.700 [INFO][4249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.707 [WARNING][4249] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.707 [INFO][4249] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.709 [INFO][4249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:26.731060 containerd[1475]: 2025-11-01 00:23:26.724 [INFO][4230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:26.731758 containerd[1475]: time="2025-11-01T00:23:26.731178780Z" level=info msg="TearDown network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\" successfully" Nov 1 00:23:26.731758 containerd[1475]: time="2025-11-01T00:23:26.731195550Z" level=info msg="StopPodSandbox for \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\" returns successfully" Nov 1 00:23:26.733736 containerd[1475]: time="2025-11-01T00:23:26.733677949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84c65659-5v5f2,Uid:2e77087b-330c-4d1c-8e6e-77f7214641fd,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:26.735966 systemd[1]: run-netns-cni\x2d2bade2c6\x2daf02\x2da6f4\x2d6c1e\x2d4647ed748ed7.mount: Deactivated successfully. 
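Both deletes at 00:23:26 succeed even though neither sandbox ever got networking set up: Calico's release path treats a missing address as a no-op, logging the "Asked to release address but it doesn't exist. Ignoring" WARNING and finishing the teardown, after which kubelet re-runs each sandbox with Attempt:1. An illustrative sketch of that idempotent release (toy types, not Calico's real API; the handle format k8s-pod-network.<containerID> is from the log):

    package main

    import "fmt"

    // ipam is a toy stand-in for Calico's address store, keyed by handle.
    type ipam struct{ byHandle map[string]string }

    // release is deliberately idempotent: releasing an unknown handle only
    // warns, matching the WARNING lines above.
    func (a *ipam) release(handle string) {
        addr, ok := a.byHandle[handle]
        if !ok {
            fmt.Println("[WARNING] Asked to release address but it doesn't exist. Ignoring", handle)
            return
        }
        delete(a.byHandle, handle)
        fmt.Println("released", addr)
    }

    func main() {
        a := &ipam{byHandle: map[string]string{}}
        a.release("k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94")
    }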
Nov 1 00:23:26.899282 systemd-networkd[1377]: cali98388399576: Link UP Nov 1 00:23:26.902634 systemd-networkd[1377]: cali98388399576: Gained carrier Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.802 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0 coredns-668d6bf9bc- kube-system c83e6a8a-f958-47de-a7b8-4adca302cf7a 952 0 2025-11-01 00:22:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-159-149 coredns-668d6bf9bc-2lq56 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali98388399576 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.802 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.850 [INFO][4281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" HandleID="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.850 [INFO][4281] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" HandleID="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-159-149", "pod":"coredns-668d6bf9bc-2lq56", "timestamp":"2025-11-01 00:23:26.850131405 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.850 [INFO][4281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.850 [INFO][4281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.850 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.858 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.865 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.870 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.872 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.874 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.874 [INFO][4281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.876 [INFO][4281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7 Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.880 [INFO][4281] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.887 [INFO][4281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.66/26] block=192.168.65.64/26 handle="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.887 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.66/26] handle="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" host="172-237-159-149" Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.888 [INFO][4281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:26.925027 containerd[1475]: 2025-11-01 00:23:26.888 [INFO][4281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.66/26] IPv6=[] ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" HandleID="k8s-pod-network.d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.925601 containerd[1475]: 2025-11-01 00:23:26.892 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83e6a8a-f958-47de-a7b8-4adca302cf7a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"coredns-668d6bf9bc-2lq56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98388399576", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:26.925601 containerd[1475]: 2025-11-01 00:23:26.892 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.66/32] ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.925601 containerd[1475]: 2025-11-01 00:23:26.893 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98388399576 ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.925601 containerd[1475]: 2025-11-01 00:23:26.904 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" 
WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.925601 containerd[1475]: 2025-11-01 00:23:26.905 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83e6a8a-f958-47de-a7b8-4adca302cf7a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7", Pod:"coredns-668d6bf9bc-2lq56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98388399576", MAC:"be:9f:20:31:42:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:26.925601 containerd[1475]: 2025-11-01 00:23:26.918 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-2lq56" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:26.953945 containerd[1475]: time="2025-11-01T00:23:26.953845342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:26.955468 containerd[1475]: time="2025-11-01T00:23:26.955110472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:26.955468 containerd[1475]: time="2025-11-01T00:23:26.955154522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:26.955468 containerd[1475]: time="2025-11-01T00:23:26.955314933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:26.988077 systemd[1]: Started cri-containerd-d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7.scope - libcontainer container d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7. Nov 1 00:23:27.011656 systemd-networkd[1377]: calib581a13117f: Link UP Nov 1 00:23:27.013360 systemd-networkd[1377]: calib581a13117f: Gained carrier Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.815 [INFO][4268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0 calico-kube-controllers-f84c65659- calico-system 2e77087b-330c-4d1c-8e6e-77f7214641fd 953 0 2025-11-01 00:23:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f84c65659 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-159-149 calico-kube-controllers-f84c65659-5v5f2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib581a13117f [] [] }} ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.816 [INFO][4268] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.867 [INFO][4286] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" HandleID="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.867 [INFO][4286] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" HandleID="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-159-149", "pod":"calico-kube-controllers-f84c65659-5v5f2", "timestamp":"2025-11-01 00:23:26.867144075 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.868 [INFO][4286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.888 [INFO][4286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.888 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.959 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.966 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.973 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.975 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.978 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.978 [INFO][4286] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.980 [INFO][4286] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652 Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.987 [INFO][4286] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.995 [INFO][4286] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.67/26] block=192.168.65.64/26 handle="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.996 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.67/26] handle="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" host="172-237-159-149" Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.997 [INFO][4286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:27.046937 containerd[1475]: 2025-11-01 00:23:26.997 [INFO][4286] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.67/26] IPv6=[] ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" HandleID="k8s-pod-network.a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:27.047508 containerd[1475]: 2025-11-01 00:23:27.003 [INFO][4268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0", GenerateName:"calico-kube-controllers-f84c65659-", Namespace:"calico-system", SelfLink:"", UID:"2e77087b-330c-4d1c-8e6e-77f7214641fd", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84c65659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"calico-kube-controllers-f84c65659-5v5f2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib581a13117f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:27.047508 containerd[1475]: 2025-11-01 00:23:27.003 [INFO][4268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.67/32] ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:27.047508 containerd[1475]: 2025-11-01 00:23:27.003 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib581a13117f ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:27.047508 containerd[1475]: 2025-11-01 00:23:27.018 [INFO][4268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:27.047508 containerd[1475]: 2025-11-01 00:23:27.024 [INFO][4268] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0", GenerateName:"calico-kube-controllers-f84c65659-", Namespace:"calico-system", SelfLink:"", UID:"2e77087b-330c-4d1c-8e6e-77f7214641fd", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84c65659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652", Pod:"calico-kube-controllers-f84c65659-5v5f2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib581a13117f", MAC:"a2:19:82:4c:19:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:27.047508 containerd[1475]: 2025-11-01 00:23:27.040 [INFO][4268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652" Namespace="calico-system" Pod="calico-kube-controllers-f84c65659-5v5f2" WorkloadEndpoint="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:27.082396 containerd[1475]: time="2025-11-01T00:23:27.082162830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2lq56,Uid:c83e6a8a-f958-47de-a7b8-4adca302cf7a,Namespace:kube-system,Attempt:1,} returns sandbox id \"d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7\"" Nov 1 00:23:27.087234 kubelet[2543]: E1101 00:23:27.086716 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:27.093047 containerd[1475]: time="2025-11-01T00:23:27.092860827Z" level=info msg="CreateContainer within sandbox \"d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:27.109410 containerd[1475]: time="2025-11-01T00:23:27.108787682Z" level=info msg="CreateContainer within sandbox \"d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b9b20db2ddf80de1b6108b39d9fca778388e7d5e94f1a6028c9aa10de8aca83\"" Nov 1 00:23:27.111970 containerd[1475]: time="2025-11-01T00:23:27.111904914Z" 
level=info msg="StartContainer for \"0b9b20db2ddf80de1b6108b39d9fca778388e7d5e94f1a6028c9aa10de8aca83\"" Nov 1 00:23:27.119554 containerd[1475]: time="2025-11-01T00:23:27.118712683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:27.119554 containerd[1475]: time="2025-11-01T00:23:27.118793724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:27.119554 containerd[1475]: time="2025-11-01T00:23:27.118806094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:27.119554 containerd[1475]: time="2025-11-01T00:23:27.118886344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:27.141350 systemd-networkd[1377]: vxlan.calico: Gained IPv6LL Nov 1 00:23:27.149730 systemd[1]: Started cri-containerd-a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652.scope - libcontainer container a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652. Nov 1 00:23:27.174638 systemd[1]: Started cri-containerd-0b9b20db2ddf80de1b6108b39d9fca778388e7d5e94f1a6028c9aa10de8aca83.scope - libcontainer container 0b9b20db2ddf80de1b6108b39d9fca778388e7d5e94f1a6028c9aa10de8aca83. Nov 1 00:23:27.216955 containerd[1475]: time="2025-11-01T00:23:27.216902081Z" level=info msg="StartContainer for \"0b9b20db2ddf80de1b6108b39d9fca778388e7d5e94f1a6028c9aa10de8aca83\" returns successfully" Nov 1 00:23:27.263038 containerd[1475]: time="2025-11-01T00:23:27.262879622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84c65659-5v5f2,Uid:2e77087b-330c-4d1c-8e6e-77f7214641fd,Namespace:calico-system,Attempt:1,} returns sandbox id \"a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652\"" Nov 1 00:23:27.269930 containerd[1475]: time="2025-11-01T00:23:27.269675391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:27.371205 kubelet[2543]: I1101 00:23:27.371150 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:27.371699 kubelet[2543]: E1101 00:23:27.371653 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:27.409931 containerd[1475]: time="2025-11-01T00:23:27.409769991Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:27.411430 containerd[1475]: time="2025-11-01T00:23:27.411293932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:27.411732 containerd[1475]: time="2025-11-01T00:23:27.411408943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:27.412157 kubelet[2543]: E1101 00:23:27.411908 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:27.412157 kubelet[2543]: E1101 00:23:27.411961 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:27.412157 kubelet[2543]: E1101 00:23:27.412112 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llcsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f84c65659-5v5f2_calico-system(2e77087b-330c-4d1c-8e6e-77f7214641fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:27.413561 kubelet[2543]: E1101 00:23:27.413529 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:23:27.566205 containerd[1475]: time="2025-11-01T00:23:27.565739275Z" level=info msg="StopPodSandbox for \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\"" Nov 1 00:23:27.570039 containerd[1475]: time="2025-11-01T00:23:27.569980686Z" level=info msg="StopPodSandbox for \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\"" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.676 [INFO][4494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.678 [INFO][4494] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" iface="eth0" netns="/var/run/netns/cni-deb2e876-7beb-65cb-2062-08e279f59ca3" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.679 [INFO][4494] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" iface="eth0" netns="/var/run/netns/cni-deb2e876-7beb-65cb-2062-08e279f59ca3" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.682 [INFO][4494] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" iface="eth0" netns="/var/run/netns/cni-deb2e876-7beb-65cb-2062-08e279f59ca3" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.682 [INFO][4494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.682 [INFO][4494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.723 [INFO][4513] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.724 [INFO][4513] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.724 [INFO][4513] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.738 [WARNING][4513] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.738 [INFO][4513] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.741 [INFO][4513] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:27.760813 containerd[1475]: 2025-11-01 00:23:27.754 [INFO][4494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:27.762279 containerd[1475]: time="2025-11-01T00:23:27.761692887Z" level=info msg="TearDown network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\" successfully" Nov 1 00:23:27.762279 containerd[1475]: time="2025-11-01T00:23:27.761763738Z" level=info msg="StopPodSandbox for \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\" returns successfully" Nov 1 00:23:27.764849 containerd[1475]: time="2025-11-01T00:23:27.764695109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57cb94b8fc-kxpw2,Uid:629a8271-4389-4e02-9056-efb21f586504,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:27.766754 systemd[1]: run-netns-cni\x2ddeb2e876\x2d7beb\x2d65cb\x2d2062\x2d08e279f59ca3.mount: Deactivated successfully. Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.697 [INFO][4502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.698 [INFO][4502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" iface="eth0" netns="/var/run/netns/cni-49223854-7a30-df24-e6e6-3e831f5d9a20" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.701 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" iface="eth0" netns="/var/run/netns/cni-49223854-7a30-df24-e6e6-3e831f5d9a20" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.704 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" iface="eth0" netns="/var/run/netns/cni-49223854-7a30-df24-e6e6-3e831f5d9a20" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.704 [INFO][4502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.704 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.773 [INFO][4518] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.773 [INFO][4518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.773 [INFO][4518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.787 [WARNING][4518] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.787 [INFO][4518] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.791 [INFO][4518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:27.805123 containerd[1475]: 2025-11-01 00:23:27.798 [INFO][4502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:27.805123 containerd[1475]: time="2025-11-01T00:23:27.804754157Z" level=info msg="TearDown network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\" successfully" Nov 1 00:23:27.805123 containerd[1475]: time="2025-11-01T00:23:27.804780838Z" level=info msg="StopPodSandbox for \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\" returns successfully" Nov 1 00:23:27.808619 containerd[1475]: time="2025-11-01T00:23:27.808578795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lbk9p,Uid:590f4e1f-e213-4b72-aab5-d1ab9906213b,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:27.812228 systemd[1]: run-netns-cni\x2d49223854\x2d7a30\x2ddf24\x2de6e6\x2d3e831f5d9a20.mount: Deactivated successfully. 
Nov 1 00:23:27.820176 kubelet[2543]: E1101 00:23:27.820147 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:27.836774 kubelet[2543]: E1101 00:23:27.836122 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:27.851312 kubelet[2543]: I1101 00:23:27.851241 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2lq56" podStartSLOduration=35.851215502 podStartE2EDuration="35.851215502s" podCreationTimestamp="2025-11-01 00:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:27.847862128 +0000 UTC m=+40.400326054" watchObservedRunningTime="2025-11-01 00:23:27.851215502 +0000 UTC m=+40.403679408" Nov 1 00:23:27.867267 kubelet[2543]: E1101 00:23:27.866139 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:23:27.971758 systemd-networkd[1377]: cali98388399576: Gained IPv6LL Nov 1 00:23:28.039800 systemd-networkd[1377]: cali708ad958e22: Link UP Nov 1 00:23:28.044432 systemd-networkd[1377]: cali708ad958e22: Gained carrier Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.937 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0 goldmane-666569f655- calico-system 590f4e1f-e213-4b72-aab5-d1ab9906213b 976 0 2025-11-01 00:23:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-237-159-149 goldmane-666569f655-lbk9p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali708ad958e22 [] [] }} ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.937 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.983 [INFO][4555] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" HandleID="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" 
Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.983 [INFO][4555] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" HandleID="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf200), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-159-149", "pod":"goldmane-666569f655-lbk9p", "timestamp":"2025-11-01 00:23:27.983361375 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.985 [INFO][4555] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.985 [INFO][4555] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.985 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:27.995 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.001 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.007 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.010 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.013 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.013 [INFO][4555] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.015 [INFO][4555] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537 Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.020 [INFO][4555] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.026 [INFO][4555] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.68/26] block=192.168.65.64/26 handle="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" host="172-237-159-149" Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.027 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.68/26] handle="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" host="172-237-159-149" Nov 1 00:23:28.058597 
containerd[1475]: 2025-11-01 00:23:28.027 [INFO][4555] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:28.058597 containerd[1475]: 2025-11-01 00:23:28.027 [INFO][4555] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.68/26] IPv6=[] ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" HandleID="k8s-pod-network.1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:28.059135 containerd[1475]: 2025-11-01 00:23:28.031 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"590f4e1f-e213-4b72-aab5-d1ab9906213b", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"goldmane-666569f655-lbk9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali708ad958e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:28.059135 containerd[1475]: 2025-11-01 00:23:28.032 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.68/32] ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:28.059135 containerd[1475]: 2025-11-01 00:23:28.032 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali708ad958e22 ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:28.059135 containerd[1475]: 2025-11-01 00:23:28.040 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:28.059135 containerd[1475]: 2025-11-01 00:23:28.040 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"590f4e1f-e213-4b72-aab5-d1ab9906213b", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537", Pod:"goldmane-666569f655-lbk9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali708ad958e22", MAC:"d2:15:96:f7:14:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:28.059135 containerd[1475]: 2025-11-01 00:23:28.051 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537" Namespace="calico-system" Pod="goldmane-666569f655-lbk9p" WorkloadEndpoint="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:28.093001 containerd[1475]: time="2025-11-01T00:23:28.091711355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:28.093001 containerd[1475]: time="2025-11-01T00:23:28.091773815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:28.093001 containerd[1475]: time="2025-11-01T00:23:28.091788616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:28.093001 containerd[1475]: time="2025-11-01T00:23:28.092629291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:28.140157 systemd[1]: Started cri-containerd-1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537.scope - libcontainer container 1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537. 
Nov 1 00:23:28.189157 systemd-networkd[1377]: cali6c4507ab101: Link UP Nov 1 00:23:28.190904 systemd-networkd[1377]: cali6c4507ab101: Gained carrier Nov 1 00:23:28.214282 containerd[1475]: time="2025-11-01T00:23:28.213437887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lbk9p,Uid:590f4e1f-e213-4b72-aab5-d1ab9906213b,Namespace:calico-system,Attempt:1,} returns sandbox id \"1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537\"" Nov 1 00:23:28.219664 containerd[1475]: time="2025-11-01T00:23:28.219563919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:27.946 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0 calico-apiserver-57cb94b8fc- calico-apiserver 629a8271-4389-4e02-9056-efb21f586504 975 0 2025-11-01 00:23:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57cb94b8fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-159-149 calico-apiserver-57cb94b8fc-kxpw2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6c4507ab101 [] [] }} ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:27.946 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:27.990 [INFO][4560] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" HandleID="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:27.991 [INFO][4560] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" HandleID="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d54e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-159-149", "pod":"calico-apiserver-57cb94b8fc-kxpw2", "timestamp":"2025-11-01 00:23:27.990360025 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:27.991 [INFO][4560] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.027 [INFO][4560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.027 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.098 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.114 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.123 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.127 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.133 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.133 [INFO][4560] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.153 [INFO][4560] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.168 [INFO][4560] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.176 [INFO][4560] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.69/26] block=192.168.65.64/26 handle="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.177 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.69/26] handle="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" host="172-237-159-149" Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.177 [INFO][4560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
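
Note: the [4560] IPAM entries above trace the full assignment path for the calico-apiserver pod: take the host-wide lock, confirm the host's affinity to 192.168.65.64/26, load the block, create a handle, write the block to claim 192.168.65.69, release the lock. A deliberately simplified Python model of that sequence follows; all names are hypothetical, and Calico's real allocator additionally handles affinities, handles, and datastore retries:

    import ipaddress
    import threading

    ipam_lock = threading.Lock()  # stands in for the host-wide IPAM lock in the log
    block = ipaddress.ip_network("192.168.65.64/26")
    # Assumed already-claimed addresses: .68 is goldmane above; .65-.67 earlier pods.
    allocated = {"192.168.65.65", "192.168.65.66", "192.168.65.67", "192.168.65.68"}

    def assign_next(handle_id):
        """Lock -> load block -> pick first free IP -> write claim -> unlock."""
        with ipam_lock:
            for host in block.hosts():
                if str(host) not in allocated:
                    allocated.add(str(host))  # "Writing block in order to claim IPs"
                    return handle_id, f"{host}/26"
        raise RuntimeError("block exhausted")

    print(assign_next("k8s-pod-network.<container-id>"))  # -> (..., '192.168.65.69/26')
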
Nov 1 00:23:28.225754 containerd[1475]: 2025-11-01 00:23:28.178 [INFO][4560] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.69/26] IPv6=[] ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" HandleID="k8s-pod-network.8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:28.226279 containerd[1475]: 2025-11-01 00:23:28.183 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0", GenerateName:"calico-apiserver-57cb94b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"629a8271-4389-4e02-9056-efb21f586504", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57cb94b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"calico-apiserver-57cb94b8fc-kxpw2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c4507ab101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:28.226279 containerd[1475]: 2025-11-01 00:23:28.184 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.69/32] ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:28.226279 containerd[1475]: 2025-11-01 00:23:28.184 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c4507ab101 ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:28.226279 containerd[1475]: 2025-11-01 00:23:28.191 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:28.226279 containerd[1475]: 2025-11-01 00:23:28.193 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0", GenerateName:"calico-apiserver-57cb94b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"629a8271-4389-4e02-9056-efb21f586504", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57cb94b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e", Pod:"calico-apiserver-57cb94b8fc-kxpw2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c4507ab101", MAC:"82:fe:8f:ba:0f:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:28.226279 containerd[1475]: 2025-11-01 00:23:28.214 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e" Namespace="calico-apiserver" Pod="calico-apiserver-57cb94b8fc-kxpw2" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:28.256205 containerd[1475]: time="2025-11-01T00:23:28.256115786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:28.258098 containerd[1475]: time="2025-11-01T00:23:28.258033869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:28.258267 containerd[1475]: time="2025-11-01T00:23:28.258206280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:28.258901 containerd[1475]: time="2025-11-01T00:23:28.258666973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:28.291447 systemd[1]: Started cri-containerd-8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e.scope - libcontainer container 8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e. 
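
Note: each containerd line above carries the journal timestamp, the containerd PID, the CNI plugin's own timestamp, a level and request number, and a source location such as cni-plugin/k8s.go 419. A small parsing sketch; the regex is inferred from the lines above, so treat it as a starting point rather than a guaranteed format:

    import re

    LINE = ("Nov 1 00:23:28.226279 containerd[1475]: 2025-11-01 00:23:28.184 "
            "[INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.69/32]")

    pat = re.compile(
        r"(?P<journal_ts>\w+ +\d+ [\d:.]+) containerd\[(?P<pid>\d+)\]: "
        r"(?P<plugin_ts>[\d-]+ [\d:.]+) \[(?P<level>\w+)\]\[(?P<req>\d+)\] "
        r"(?P<src>\S+) (?P<srcline>\d+): (?P<msg>.*)")

    m = pat.match(LINE)
    print(m.group("level"), m.group("src"), m.group("srcline"))  # INFO cni-plugin/k8s.go 419
    print(m.group("msg"))  # Calico CNI using IPs: [192.168.65.69/32]
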
Nov 1 00:23:28.350881 containerd[1475]: time="2025-11-01T00:23:28.350853286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57cb94b8fc-kxpw2,Uid:629a8271-4389-4e02-9056-efb21f586504,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e\"" Nov 1 00:23:28.358866 containerd[1475]: time="2025-11-01T00:23:28.358722409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:28.359614 containerd[1475]: time="2025-11-01T00:23:28.359587315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:28.359813 containerd[1475]: time="2025-11-01T00:23:28.359678756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:28.360013 kubelet[2543]: E1101 00:23:28.359961 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:28.360076 kubelet[2543]: E1101 00:23:28.360030 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:28.360629 containerd[1475]: time="2025-11-01T00:23:28.360577612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:28.361777 kubelet[2543]: E1101 00:23:28.360873 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sntd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lbk9p_calico-system(590f4e1f-e213-4b72-aab5-d1ab9906213b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:28.362705 kubelet[2543]: E1101 00:23:28.361978 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:23:28.483824 systemd-networkd[1377]: 
calib581a13117f: Gained IPv6LL Nov 1 00:23:28.517933 containerd[1475]: time="2025-11-01T00:23:28.517720464Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:28.519618 containerd[1475]: time="2025-11-01T00:23:28.519436385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:28.519618 containerd[1475]: time="2025-11-01T00:23:28.519536736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:28.520032 kubelet[2543]: E1101 00:23:28.519991 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:28.520085 kubelet[2543]: E1101 00:23:28.520049 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:28.520226 kubelet[2543]: E1101 00:23:28.520186 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-px4c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57cb94b8fc-kxpw2_calico-apiserver(629a8271-4389-4e02-9056-efb21f586504): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:28.521693 kubelet[2543]: E1101 00:23:28.521632 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:23:28.562089 containerd[1475]: time="2025-11-01T00:23:28.561967062Z" level=info msg="StopPodSandbox for \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\"" Nov 1 00:23:28.563522 containerd[1475]: time="2025-11-01T00:23:28.562228394Z" level=info msg="StopPodSandbox for \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\"" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.624 [INFO][4683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.625 [INFO][4683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" iface="eth0" netns="/var/run/netns/cni-3b192e8a-9f6d-b7c9-54fe-6fdfb1c76be3" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.626 [INFO][4683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" iface="eth0" netns="/var/run/netns/cni-3b192e8a-9f6d-b7c9-54fe-6fdfb1c76be3" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.627 [INFO][4683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" iface="eth0" netns="/var/run/netns/cni-3b192e8a-9f6d-b7c9-54fe-6fdfb1c76be3" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.627 [INFO][4683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.627 [INFO][4683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.679 [INFO][4700] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.679 [INFO][4700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.679 [INFO][4700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.686 [WARNING][4700] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.686 [INFO][4700] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.687 [INFO][4700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:28.695005 containerd[1475]: 2025-11-01 00:23:28.691 [INFO][4683] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:28.697342 containerd[1475]: time="2025-11-01T00:23:28.695220693Z" level=info msg="TearDown network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\" successfully" Nov 1 00:23:28.697342 containerd[1475]: time="2025-11-01T00:23:28.695285783Z" level=info msg="StopPodSandbox for \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\" returns successfully" Nov 1 00:23:28.697342 containerd[1475]: time="2025-11-01T00:23:28.696747013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq8rb,Uid:780eceec-d826-43fc-b38c-894af01c17df,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:28.697691 kubelet[2543]: E1101 00:23:28.695943 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.629 [INFO][4690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.630 [INFO][4690] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" iface="eth0" netns="/var/run/netns/cni-d953e292-62ca-093e-4041-0619d2db75c6" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.632 [INFO][4690] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" iface="eth0" netns="/var/run/netns/cni-d953e292-62ca-093e-4041-0619d2db75c6" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.633 [INFO][4690] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" iface="eth0" netns="/var/run/netns/cni-d953e292-62ca-093e-4041-0619d2db75c6" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.633 [INFO][4690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.633 [INFO][4690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.682 [INFO][4704] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.682 [INFO][4704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.687 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.694 [WARNING][4704] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.694 [INFO][4704] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.697 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:28.704666 containerd[1475]: 2025-11-01 00:23:28.701 [INFO][4690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:28.705149 containerd[1475]: time="2025-11-01T00:23:28.704998129Z" level=info msg="TearDown network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\" successfully" Nov 1 00:23:28.705149 containerd[1475]: time="2025-11-01T00:23:28.705030579Z" level=info msg="StopPodSandbox for \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\" returns successfully" Nov 1 00:23:28.705807 containerd[1475]: time="2025-11-01T00:23:28.705668933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-vd6vw,Uid:2f5e2ac6-875f-4179-9d8d-01e4d536c5f3,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:28.730562 systemd[1]: run-netns-cni\x2d3b192e8a\x2d9f6d\x2db7c9\x2d54fe\x2d6fdfb1c76be3.mount: Deactivated successfully. Nov 1 00:23:28.730689 systemd[1]: run-netns-cni\x2dd953e292\x2d62ca\x2d093e\x2d4041\x2d0619d2db75c6.mount: Deactivated successfully. 
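
Note: the two run-netns mount units above use systemd's unit-name escaping, in which "/" in a path becomes "-" and a literal "-" becomes \x2d (systemd-escape --unescape performs the reverse mapping natively). A pure-Python equivalent, applied to the first unit, recovers the cni netns path logged during the teardown:

    def systemd_unescape(s: str) -> str:
        # systemd unit-name unescaping: "-" -> "/", "\xNN" -> byte NN.
        out, i = [], 0
        while i < len(s):
            if s.startswith("\\x", i):
                out.append(chr(int(s[i + 2:i + 4], 16)))
                i += 4
            elif s[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(s[i])
                i += 1
        return "".join(out)

    unit = r"run-netns-cni\x2d3b192e8a\x2d9f6d\x2db7c9\x2d54fe\x2d6fdfb1c76be3"
    print(systemd_unescape(unit))
    # -> run/netns/cni-3b192e8a-9f6d-b7c9-54fe-6fdfb1c76be3  (i.e. under /var/run)
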
Nov 1 00:23:28.844094 kubelet[2543]: E1101 00:23:28.843778 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:23:28.857648 kubelet[2543]: E1101 00:23:28.856803 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:28.857648 kubelet[2543]: E1101 00:23:28.857609 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:23:28.859508 kubelet[2543]: E1101 00:23:28.857752 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:23:28.888376 systemd-networkd[1377]: calibf594afb50f: Link UP Nov 1 00:23:28.891617 systemd-networkd[1377]: calibf594afb50f: Gained carrier Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.780 [INFO][4713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0 coredns-668d6bf9bc- kube-system 780eceec-d826-43fc-b38c-894af01c17df 1003 0 2025-11-01 00:22:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-159-149 coredns-668d6bf9bc-vq8rb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf594afb50f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.781 [INFO][4713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" 
WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.812 [INFO][4738] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" HandleID="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.812 [INFO][4738] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" HandleID="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-159-149", "pod":"coredns-668d6bf9bc-vq8rb", "timestamp":"2025-11-01 00:23:28.812052912 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.812 [INFO][4738] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.812 [INFO][4738] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.812 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.820 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.827 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.834 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.836 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.840 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.840 [INFO][4738] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.845 [INFO][4738] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2 Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.861 [INFO][4738] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.872 [INFO][4738] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.70/26] block=192.168.65.64/26 
handle="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.872 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.70/26] handle="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" host="172-237-159-149" Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.872 [INFO][4738] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:28.949201 containerd[1475]: 2025-11-01 00:23:28.872 [INFO][4738] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.70/26] IPv6=[] ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" HandleID="k8s-pod-network.b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.951010 containerd[1475]: 2025-11-01 00:23:28.880 [INFO][4713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"780eceec-d826-43fc-b38c-894af01c17df", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"coredns-668d6bf9bc-vq8rb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf594afb50f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:28.951010 containerd[1475]: 2025-11-01 00:23:28.881 [INFO][4713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.70/32] ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.951010 containerd[1475]: 2025-11-01 00:23:28.882 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf594afb50f 
ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.951010 containerd[1475]: 2025-11-01 00:23:28.899 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:28.951010 containerd[1475]: 2025-11-01 00:23:28.900 [INFO][4713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"780eceec-d826-43fc-b38c-894af01c17df", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2", Pod:"coredns-668d6bf9bc-vq8rb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf594afb50f", MAC:"fa:fe:76:d7:fc:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:28.951010 containerd[1475]: 2025-11-01 00:23:28.946 [INFO][4713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq8rb" WorkloadEndpoint="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:29.001612 containerd[1475]: time="2025-11-01T00:23:28.999758410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:29.001612 containerd[1475]: time="2025-11-01T00:23:28.999818421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:29.001612 containerd[1475]: time="2025-11-01T00:23:28.999829441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:29.001612 containerd[1475]: time="2025-11-01T00:23:29.000037512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:29.040068 systemd[1]: run-containerd-runc-k8s.io-b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2-runc.Y4L0jr.mount: Deactivated successfully. Nov 1 00:23:29.049654 systemd[1]: Started cri-containerd-b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2.scope - libcontainer container b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2. Nov 1 00:23:29.064770 systemd-networkd[1377]: calic2adddf9e36: Link UP Nov 1 00:23:29.067899 systemd-networkd[1377]: calic2adddf9e36: Gained carrier Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.784 [INFO][4724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0 calico-apiserver-6dd9845dcf- calico-apiserver 2f5e2ac6-875f-4179-9d8d-01e4d536c5f3 1004 0 2025-11-01 00:23:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dd9845dcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-159-149 calico-apiserver-6dd9845dcf-vd6vw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2adddf9e36 [] [] }} ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.784 [INFO][4724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.814 [INFO][4743] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" HandleID="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.815 [INFO][4743] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" HandleID="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-159-149", "pod":"calico-apiserver-6dd9845dcf-vd6vw", "timestamp":"2025-11-01 00:23:28.814384358 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.816 [INFO][4743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.872 [INFO][4743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.872 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.940 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.973 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:28.998 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.005 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.013 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.013 [INFO][4743] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.017 [INFO][4743] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4 Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.038 [INFO][4743] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.048 [INFO][4743] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.71/26] block=192.168.65.64/26 handle="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.048 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.71/26] handle="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" host="172-237-159-149" Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.048 [INFO][4743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
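
Note: the coredns WorkloadEndpoint dumps above print the named ports in Go's hex notation (Port:0x35, Port:0x23c1). Decoding them confirms the expected CoreDNS values:

    # Hex port values from the WorkloadEndpoint dumps above.
    for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)]:
        print(name, port)  # dns 53, dns-tcp 53, metrics 9153
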
Nov 1 00:23:29.119341 containerd[1475]: 2025-11-01 00:23:29.048 [INFO][4743] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.71/26] IPv6=[] ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" HandleID="k8s-pod-network.e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:29.120100 containerd[1475]: 2025-11-01 00:23:29.055 [INFO][4724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"calico-apiserver-6dd9845dcf-vd6vw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2adddf9e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:29.120100 containerd[1475]: 2025-11-01 00:23:29.055 [INFO][4724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.71/32] ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:29.120100 containerd[1475]: 2025-11-01 00:23:29.055 [INFO][4724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2adddf9e36 ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:29.120100 containerd[1475]: 2025-11-01 00:23:29.070 [INFO][4724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:29.120100 containerd[1475]: 2025-11-01 00:23:29.073 [INFO][4724] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4", Pod:"calico-apiserver-6dd9845dcf-vd6vw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2adddf9e36", MAC:"82:bd:b9:9f:71:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:29.120100 containerd[1475]: 2025-11-01 00:23:29.113 [INFO][4724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-vd6vw" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:29.185649 containerd[1475]: time="2025-11-01T00:23:29.184979405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq8rb,Uid:780eceec-d826-43fc-b38c-894af01c17df,Namespace:kube-system,Attempt:1,} returns sandbox id \"b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2\"" Nov 1 00:23:29.187562 kubelet[2543]: E1101 00:23:29.186732 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:29.193324 containerd[1475]: time="2025-11-01T00:23:29.193273407Z" level=info msg="CreateContainer within sandbox \"b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:29.196596 containerd[1475]: time="2025-11-01T00:23:29.193979572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:29.196596 containerd[1475]: time="2025-11-01T00:23:29.194104502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:29.196596 containerd[1475]: time="2025-11-01T00:23:29.194118723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:29.196596 containerd[1475]: time="2025-11-01T00:23:29.194225863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:29.221069 containerd[1475]: time="2025-11-01T00:23:29.220438469Z" level=info msg="CreateContainer within sandbox \"b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"869258bfe412b42cdfe4adc9741aceb14c3ac42b05f60af6de0c983a20f5917d\"" Nov 1 00:23:29.226653 containerd[1475]: time="2025-11-01T00:23:29.224551375Z" level=info msg="StartContainer for \"869258bfe412b42cdfe4adc9741aceb14c3ac42b05f60af6de0c983a20f5917d\"" Nov 1 00:23:29.235826 systemd[1]: Started cri-containerd-e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4.scope - libcontainer container e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4. Nov 1 00:23:29.271699 systemd[1]: Started cri-containerd-869258bfe412b42cdfe4adc9741aceb14c3ac42b05f60af6de0c983a20f5917d.scope - libcontainer container 869258bfe412b42cdfe4adc9741aceb14c3ac42b05f60af6de0c983a20f5917d. Nov 1 00:23:29.309157 containerd[1475]: time="2025-11-01T00:23:29.308934330Z" level=info msg="StartContainer for \"869258bfe412b42cdfe4adc9741aceb14c3ac42b05f60af6de0c983a20f5917d\" returns successfully" Nov 1 00:23:29.362463 containerd[1475]: time="2025-11-01T00:23:29.361960836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-vd6vw,Uid:2f5e2ac6-875f-4179-9d8d-01e4d536c5f3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4\"" Nov 1 00:23:29.364903 containerd[1475]: time="2025-11-01T00:23:29.364660583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:29.379815 systemd-networkd[1377]: cali6c4507ab101: Gained IPv6LL Nov 1 00:23:29.508343 containerd[1475]: time="2025-11-01T00:23:29.508264523Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:29.509746 containerd[1475]: time="2025-11-01T00:23:29.509668452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:29.510892 containerd[1475]: time="2025-11-01T00:23:29.509819283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:29.510940 kubelet[2543]: E1101 00:23:29.510037 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:29.510940 kubelet[2543]: E1101 00:23:29.510095 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:29.510940 kubelet[2543]: E1101 00:23:29.510289 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dd9845dcf-vd6vw_calico-apiserver(2f5e2ac6-875f-4179-9d8d-01e4d536c5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:29.511772 kubelet[2543]: E1101 00:23:29.511714 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:23:29.565157 containerd[1475]: time="2025-11-01T00:23:29.564988892Z" level=info msg="StopPodSandbox for \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\"" Nov 1 00:23:29.567891 containerd[1475]: time="2025-11-01T00:23:29.567764820Z" 
level=info msg="StopPodSandbox for \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\"" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.668 [INFO][4912] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.669 [INFO][4912] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" iface="eth0" netns="/var/run/netns/cni-7edc517b-6265-ffcb-8464-648988aab397" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.669 [INFO][4912] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" iface="eth0" netns="/var/run/netns/cni-7edc517b-6265-ffcb-8464-648988aab397" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.674 [INFO][4912] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" iface="eth0" netns="/var/run/netns/cni-7edc517b-6265-ffcb-8464-648988aab397" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.674 [INFO][4912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.674 [INFO][4912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.730 [INFO][4928] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.731 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.731 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.738 [WARNING][4928] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.738 [INFO][4928] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.741 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:29.748728 containerd[1475]: 2025-11-01 00:23:29.745 [INFO][4912] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:29.749790 containerd[1475]: time="2025-11-01T00:23:29.749622572Z" level=info msg="TearDown network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\" successfully" Nov 1 00:23:29.749790 containerd[1475]: time="2025-11-01T00:23:29.749648422Z" level=info msg="StopPodSandbox for \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\" returns successfully" Nov 1 00:23:29.751062 containerd[1475]: time="2025-11-01T00:23:29.751014201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nw6x5,Uid:42a33fba-271a-4a52-bba9-06d9d0613c0c,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:29.756167 systemd[1]: run-netns-cni\x2d7edc517b\x2d6265\x2dffcb\x2d8464\x2d648988aab397.mount: Deactivated successfully. Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.665 [INFO][4913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.666 [INFO][4913] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" iface="eth0" netns="/var/run/netns/cni-ac5c0735-9df9-c14f-9676-8a2a2fa32aea" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.674 [INFO][4913] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" iface="eth0" netns="/var/run/netns/cni-ac5c0735-9df9-c14f-9676-8a2a2fa32aea" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.674 [INFO][4913] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" iface="eth0" netns="/var/run/netns/cni-ac5c0735-9df9-c14f-9676-8a2a2fa32aea" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.674 [INFO][4913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.674 [INFO][4913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.732 [INFO][4929] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.734 [INFO][4929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.740 [INFO][4929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.764 [WARNING][4929] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.764 [INFO][4929] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.772 [INFO][4929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:29.793945 containerd[1475]: 2025-11-01 00:23:29.788 [INFO][4913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:29.797602 containerd[1475]: time="2025-11-01T00:23:29.796624019Z" level=info msg="TearDown network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\" successfully" Nov 1 00:23:29.797602 containerd[1475]: time="2025-11-01T00:23:29.796688000Z" level=info msg="StopPodSandbox for \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\" returns successfully" Nov 1 00:23:29.800552 containerd[1475]: time="2025-11-01T00:23:29.799403117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-xh7sj,Uid:d94db435-8568-49d2-8fbb-f0e2ac2a0138,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:29.802214 systemd[1]: run-netns-cni\x2dac5c0735\x2d9df9\x2dc14f\x2d9676\x2d8a2a2fa32aea.mount: Deactivated successfully. Nov 1 00:23:29.866914 kubelet[2543]: E1101 00:23:29.866779 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:29.875891 kubelet[2543]: E1101 00:23:29.875604 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:23:29.875891 kubelet[2543]: E1101 00:23:29.875705 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:23:29.878819 kubelet[2543]: E1101 00:23:29.878711 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:23:29.888510 kubelet[2543]: I1101 00:23:29.887185 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vq8rb" podStartSLOduration=37.887169193 podStartE2EDuration="37.887169193s" podCreationTimestamp="2025-11-01 00:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:29.885020949 +0000 UTC m=+42.437484875" watchObservedRunningTime="2025-11-01 00:23:29.887169193 +0000 UTC m=+42.439633099" Nov 1 00:23:29.892107 systemd-networkd[1377]: cali708ad958e22: Gained IPv6LL Nov 1 00:23:30.065904 systemd-networkd[1377]: calif0d7aae64c4: Link UP Nov 1 00:23:30.067797 systemd-networkd[1377]: calif0d7aae64c4: Gained carrier Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:29.878 [INFO][4942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-csi--node--driver--nw6x5-eth0 csi-node-driver- calico-system 42a33fba-271a-4a52-bba9-06d9d0613c0c 1043 0 2025-11-01 00:23:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-159-149 csi-node-driver-nw6x5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif0d7aae64c4 [] [] }} ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system" Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:29.880 [INFO][4942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system" Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:29.994 [INFO][4966] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" HandleID="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:29.995 [INFO][4966] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" HandleID="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004340b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-159-149", "pod":"csi-node-driver-nw6x5", "timestamp":"2025-11-01 00:23:29.994871695 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:29.996 [INFO][4966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:29.996 [INFO][4966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:29.997 [INFO][4966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.004 [INFO][4966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.014 [INFO][4966] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.026 [INFO][4966] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.030 [INFO][4966] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.033 [INFO][4966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.033 [INFO][4966] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.035 [INFO][4966] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271 Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.039 [INFO][4966] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.048 [INFO][4966] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.72/26] block=192.168.65.64/26 handle="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.048 [INFO][4966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.72/26] handle="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" host="172-237-159-149" Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.048 [INFO][4966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
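
The run of ipam/ipam.go entries above is Calico's allocation path in miniature: acquire the host-wide lock, confirm this node's affinity for the block 192.168.65.64/26, load the block, claim the lowest free address (here 192.168.65.72), write the block back to claim the IP, release the lock. As a reading aid, here is a minimal Go sketch of that block-and-ordinal scheme; the types and names are invented for illustration and are not Calico's real data model. Under the assumption that ordinals .64 through .71 are already taken at this point (the log has just handed out .71 and .72), it also shows why the next assignment below lands on 192.168.65.73.

package main

import (
	"fmt"
	"net/netip"
)

// block is an illustrative stand-in for a Calico IPAM block: a /26 CIDR plus a
// per-ordinal allocation table. The real model lives in the datastore; these
// names are invented for the sketch.
type block struct {
	cidr      netip.Prefix // e.g. 192.168.65.64/26
	allocated [64]bool     // one slot per address ordinal in the /26
}

// assign mirrors the logged sequence "Attempting to assign 1 addresses from
// block" followed by "Writing block in order to claim IPs": take the lowest
// free ordinal, mark it used, and return the corresponding address.
func (b *block) assign() (netip.Addr, bool) {
	for ord := range b.allocated {
		if b.allocated[ord] {
			continue
		}
		b.allocated[ord] = true
		addr := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			addr = addr.Next() // walk from the block base up to the ordinal
		}
		return addr, true
	}
	return netip.Addr{}, false // block exhausted; real IPAM would try another block
}

func main() {
	b := block{cidr: netip.MustParsePrefix("192.168.65.64/26")}
	for i := 0; i < 8; i++ {
		b.allocated[i] = true // assume .64-.71 are taken, as in the log
	}
	for _, pod := range []string{"csi-node-driver-nw6x5", "calico-apiserver-6dd9845dcf-xh7sj"} {
		if ip, ok := b.assign(); ok {
			fmt.Printf("%s -> %s/26\n", pod, ip) // prints .72, then .73
		}
	}
}

The only arithmetic involved is the ordinal walk: 192.168.65.72 is ordinal 8 of the /26, and the next free ordinal yields 192.168.65.73, exactly as the two "Successfully claimed IPs" entries report.
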
Nov 1 00:23:30.090390 containerd[1475]: 2025-11-01 00:23:30.048 [INFO][4966] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.72/26] IPv6=[] ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" HandleID="k8s-pod-network.b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:30.091455 containerd[1475]: 2025-11-01 00:23:30.053 [INFO][4942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system" Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-csi--node--driver--nw6x5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42a33fba-271a-4a52-bba9-06d9d0613c0c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"csi-node-driver-nw6x5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0d7aae64c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:30.091455 containerd[1475]: 2025-11-01 00:23:30.053 [INFO][4942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.72/32] ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system" Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:30.091455 containerd[1475]: 2025-11-01 00:23:30.054 [INFO][4942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0d7aae64c4 ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system" Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:30.091455 containerd[1475]: 2025-11-01 00:23:30.066 [INFO][4942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system" Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:30.091455 containerd[1475]: 2025-11-01 00:23:30.066 [INFO][4942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system"
Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-csi--node--driver--nw6x5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42a33fba-271a-4a52-bba9-06d9d0613c0c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271", Pod:"csi-node-driver-nw6x5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0d7aae64c4", MAC:"ea:3e:9c:9d:ae:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:30.091455 containerd[1475]: 2025-11-01 00:23:30.085 [INFO][4942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271" Namespace="calico-system" Pod="csi-node-driver-nw6x5" WorkloadEndpoint="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:30.135163 containerd[1475]: time="2025-11-01T00:23:30.133927474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:30.135163 containerd[1475]: time="2025-11-01T00:23:30.134027574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:30.135163 containerd[1475]: time="2025-11-01T00:23:30.134068414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:30.135163 containerd[1475]: time="2025-11-01T00:23:30.134205755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:30.180969 systemd[1]: Started cri-containerd-b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271.scope - libcontainer container b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271. 
Nov 1 00:23:30.208401 systemd-networkd[1377]: califb20d3092a8: Link UP Nov 1 00:23:30.212008 systemd-networkd[1377]: califb20d3092a8: Gained carrier Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:29.914 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0 calico-apiserver-6dd9845dcf- calico-apiserver d94db435-8568-49d2-8fbb-f0e2ac2a0138 1042 0 2025-11-01 00:23:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dd9845dcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-159-149 calico-apiserver-6dd9845dcf-xh7sj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb20d3092a8 [] [] }} ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:29.915 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.017 [INFO][4971] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" HandleID="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.022 [INFO][4971] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" HandleID="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bcfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-159-149", "pod":"calico-apiserver-6dd9845dcf-xh7sj", "timestamp":"2025-11-01 00:23:30.017336911 +0000 UTC"}, Hostname:"172-237-159-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.022 [INFO][4971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.051 [INFO][4971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
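
Note the bracketing pair around every IPAM operation in this excerpt: "About to acquire host-wide IPAM lock." and "Released host-wide IPAM lock.". Address changes on the node are fully serialized: handler [4971] announced its intent at 00:23:30.022 but only acquired the lock at 00:23:30.051, immediately after handler [4966] released it at 00:23:30.048. The journal does not reveal how the lock is implemented; a common way to build a host-wide, cross-process lock is an exclusive flock on a well-known file, sketched below with an invented lock path.

package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostWideLock serializes a critical section across all processes on one
// node with an exclusive flock, the usual building block for a "host-wide"
// lock like the one this journal reports. The lock path is a placeholder.
func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	fmt.Println("About to acquire host-wide IPAM lock.")
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil { // blocks until free
		return err
	}
	defer func() {
		syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
		fmt.Println("Released host-wide IPAM lock.")
	}()
	fmt.Println("Acquired host-wide IPAM lock.")
	return fn()
}

func main() {
	_ = withHostWideLock("/tmp/ipam.lock", func() error {
		return nil // assign or release addresses here, one process at a time
	})
}
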
Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.051 [INFO][4971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-159-149' Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.104 [INFO][4971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.116 [INFO][4971] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.126 [INFO][4971] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.128 [INFO][4971] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.135 [INFO][4971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.136 [INFO][4971] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.141 [INFO][4971] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015 Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.154 [INFO][4971] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.186 [INFO][4971] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.73/26] block=192.168.65.64/26 handle="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.186 [INFO][4971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.73/26] handle="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" host="172-237-159-149" Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.186 [INFO][4971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
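
On either side of this point the excerpt keeps hitting one failure mode: every ghcr.io/flatcar/calico/* image at tag v3.30.4 resolves to a 404 ("trying next host - response was http.StatusNotFound"), so kubelet cycles the affected pods through ErrImagePull and then ImagePullBackOff while the sandboxes themselves come up fine. To inventory which references are missing, the containerd error lines can be scraped straight out of the journal; a minimal sketch, assuming the journal text arrives on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// Matches containerd's error entries as they appear in this journal, e.g.
//   level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
// and captures the image reference between the escaped quotes.
var pullFailed = regexp.MustCompile(`PullImage \\"([^"\\]+)\\" failed`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range pullFailed.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	refs := make([]string, 0, len(counts))
	for ref := range counts {
		refs = append(refs, ref)
	}
	sort.Strings(refs)
	for _, ref := range refs {
		fmt.Printf("%3d  %s\n", counts[ref], ref) // e.g. ghcr.io/flatcar/calico/csi:v3.30.4
	}
}

Fed with something like journalctl -u containerd --no-pager, it prints each failing reference once with a count, which makes the scope of the registry problem (apiserver, csi, node-driver-registrar, goldmane, whisker, whisker-backend) visible at a glance.
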
Nov 1 00:23:30.242970 containerd[1475]: 2025-11-01 00:23:30.187 [INFO][4971] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.73/26] IPv6=[] ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" HandleID="k8s-pod-network.192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:30.244047 containerd[1475]: 2025-11-01 00:23:30.195 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94db435-8568-49d2-8fbb-f0e2ac2a0138", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"", Pod:"calico-apiserver-6dd9845dcf-xh7sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb20d3092a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:30.244047 containerd[1475]: 2025-11-01 00:23:30.195 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.73/32] ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:30.244047 containerd[1475]: 2025-11-01 00:23:30.195 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb20d3092a8 ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:30.244047 containerd[1475]: 2025-11-01 00:23:30.214 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:30.244047 containerd[1475]: 2025-11-01 00:23:30.216 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active
container ID to endpoint ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94db435-8568-49d2-8fbb-f0e2ac2a0138", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015", Pod:"calico-apiserver-6dd9845dcf-xh7sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb20d3092a8", MAC:"a6:23:cc:58:d7:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:30.244047 containerd[1475]: 2025-11-01 00:23:30.237 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015" Namespace="calico-apiserver" Pod="calico-apiserver-6dd9845dcf-xh7sj" WorkloadEndpoint="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:30.276218 systemd-networkd[1377]: calibf594afb50f: Gained IPv6LL Nov 1 00:23:30.289194 containerd[1475]: time="2025-11-01T00:23:30.288921004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:30.289194 containerd[1475]: time="2025-11-01T00:23:30.289100495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:30.289194 containerd[1475]: time="2025-11-01T00:23:30.289158706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:30.289624 containerd[1475]: time="2025-11-01T00:23:30.289597708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:30.309704 containerd[1475]: time="2025-11-01T00:23:30.309628727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nw6x5,Uid:42a33fba-271a-4a52-bba9-06d9d0613c0c,Namespace:calico-system,Attempt:1,} returns sandbox id \"b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271\"" Nov 1 00:23:30.316282 containerd[1475]: time="2025-11-01T00:23:30.316099616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:30.328320 systemd[1]: Started cri-containerd-192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015.scope - libcontainer container 192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015. Nov 1 00:23:30.409788 containerd[1475]: time="2025-11-01T00:23:30.409716612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dd9845dcf-xh7sj,Uid:d94db435-8568-49d2-8fbb-f0e2ac2a0138,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015\"" Nov 1 00:23:30.451030 containerd[1475]: time="2025-11-01T00:23:30.450976067Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:30.452162 containerd[1475]: time="2025-11-01T00:23:30.452076863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:30.452162 containerd[1475]: time="2025-11-01T00:23:30.452114543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:30.452273 kubelet[2543]: E1101 00:23:30.452236 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:30.452321 kubelet[2543]: E1101 00:23:30.452286 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:30.452858 kubelet[2543]: E1101 00:23:30.452797 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:30.453395 containerd[1475]: time="2025-11-01T00:23:30.453334361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:30.596660 systemd-networkd[1377]: calic2adddf9e36: Gained IPv6LL Nov 1 00:23:30.603290 containerd[1475]: time="2025-11-01T00:23:30.603220820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:30.604321 containerd[1475]: time="2025-11-01T00:23:30.604286437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:30.604524 containerd[1475]: time="2025-11-01T00:23:30.604360957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:30.604574 kubelet[2543]: E1101 00:23:30.604539 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:30.604627 kubelet[2543]: E1101 00:23:30.604587 2543 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:30.604839 kubelet[2543]: E1101 00:23:30.604783 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-485nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dd9845dcf-xh7sj_calico-apiserver(d94db435-8568-49d2-8fbb-f0e2ac2a0138): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:30.605738 containerd[1475]: time="2025-11-01T00:23:30.605700905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:30.606070 kubelet[2543]: E1101 00:23:30.606013 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:23:30.739844 
containerd[1475]: time="2025-11-01T00:23:30.739757601Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:30.740818 containerd[1475]: time="2025-11-01T00:23:30.740773177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:30.740975 containerd[1475]: time="2025-11-01T00:23:30.740797807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:30.741187 kubelet[2543]: E1101 00:23:30.741103 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:30.741250 kubelet[2543]: E1101 00:23:30.741198 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:30.741452 kubelet[2543]: E1101 00:23:30.741377 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:30.742977 kubelet[2543]: E1101 00:23:30.742860 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:30.880940 kubelet[2543]: E1101 00:23:30.880736 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:23:30.888799 kubelet[2543]: E1101 00:23:30.887692 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:23:30.889047 kubelet[2543]: E1101 00:23:30.889010 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:23:30.890971 kubelet[2543]: E1101 00:23:30.890894 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:31.172834 systemd-networkd[1377]: calif0d7aae64c4: Gained IPv6LL Nov 1 00:23:31.363708 systemd-networkd[1377]: califb20d3092a8: Gained IPv6LL Nov 1 00:23:31.564046 containerd[1475]: time="2025-11-01T00:23:31.563225553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:31.708091 containerd[1475]: time="2025-11-01T00:23:31.707962229Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:31.709812 containerd[1475]: time="2025-11-01T00:23:31.709642928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:31.709812 containerd[1475]: time="2025-11-01T00:23:31.709710789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:31.709977 kubelet[2543]: E1101 00:23:31.709919 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:31.710051 kubelet[2543]: E1101 
00:23:31.709990 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:31.711172 kubelet[2543]: E1101 00:23:31.710878 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4826801892e941d495724508c51c8278,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:31.713974 containerd[1475]: time="2025-11-01T00:23:31.713594420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:31.845958 containerd[1475]: time="2025-11-01T00:23:31.845156283Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:31.847071 containerd[1475]: time="2025-11-01T00:23:31.846935483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:31.847071 containerd[1475]: time="2025-11-01T00:23:31.847011723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:31.847290 kubelet[2543]: E1101 00:23:31.847218 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:31.847290 kubelet[2543]: E1101 00:23:31.847282 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:31.848262 kubelet[2543]: E1101 00:23:31.847424 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:31.849652 kubelet[2543]: E1101 00:23:31.849591 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:23:31.890647 kubelet[2543]: E1101 00:23:31.890604 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:23:31.891151 kubelet[2543]: E1101 00:23:31.891104 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:41.584241 containerd[1475]: time="2025-11-01T00:23:41.583781280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:41.718526 containerd[1475]: time="2025-11-01T00:23:41.718102992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:41.720363 containerd[1475]: time="2025-11-01T00:23:41.719232345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:41.720601 containerd[1475]: time="2025-11-01T00:23:41.720459559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:41.720918 kubelet[2543]: E1101 00:23:41.720863 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:41.721342 kubelet[2543]: E1101 00:23:41.720931 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:41.721997 containerd[1475]: time="2025-11-01T00:23:41.721794003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:41.722940 kubelet[2543]: E1101 00:23:41.722889 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sntd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lbk9p_calico-system(590f4e1f-e213-4b72-aab5-d1ab9906213b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:41.724338 kubelet[2543]: E1101 00:23:41.724093 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:23:41.861879 containerd[1475]: time="2025-11-01T00:23:41.861617001Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:41.862894 containerd[1475]: time="2025-11-01T00:23:41.862742505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:41.862894 containerd[1475]: time="2025-11-01T00:23:41.862836775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:41.865117 kubelet[2543]: E1101 00:23:41.865067 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:41.865182 kubelet[2543]: E1101 00:23:41.865135 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:41.865336 kubelet[2543]: E1101 00:23:41.865286 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dd9845dcf-vd6vw_calico-apiserver(2f5e2ac6-875f-4179-9d8d-01e4d536c5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:41.867104 kubelet[2543]: E1101 00:23:41.867032 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:23:42.564412 containerd[1475]: time="2025-11-01T00:23:42.564242560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:42.704452 containerd[1475]: time="2025-11-01T00:23:42.704385154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:42.705886 containerd[1475]: time="2025-11-01T00:23:42.705812678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:42.706039 containerd[1475]: time="2025-11-01T00:23:42.705897558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:42.706127 kubelet[2543]: E1101 00:23:42.706087 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:42.706215 kubelet[2543]: E1101 00:23:42.706144 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:42.706387 kubelet[2543]: E1101 00:23:42.706299 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llcsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f84c65659-5v5f2_calico-system(2e77087b-330c-4d1c-8e6e-77f7214641fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:42.708126 kubelet[2543]: E1101 00:23:42.708081 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:23:43.568596 containerd[1475]: time="2025-11-01T00:23:43.568210362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:43.699601 containerd[1475]: time="2025-11-01T00:23:43.699548149Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:43.700784 containerd[1475]: time="2025-11-01T00:23:43.700743772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:43.700958 containerd[1475]: time="2025-11-01T00:23:43.700799712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:43.702473 kubelet[2543]: E1101 00:23:43.701204 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:43.702473 kubelet[2543]: E1101 00:23:43.701257 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:43.702473 kubelet[2543]: E1101 00:23:43.701399 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-px4c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57cb94b8fc-kxpw2_calico-apiserver(629a8271-4389-4e02-9056-efb21f586504): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:43.703043 kubelet[2543]: E1101 00:23:43.703014 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:23:44.570539 containerd[1475]: time="2025-11-01T00:23:44.570040462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:44.571703 kubelet[2543]: E1101 00:23:44.571189 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:23:44.706592 containerd[1475]: time="2025-11-01T00:23:44.706383620Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:44.707699 containerd[1475]: time="2025-11-01T00:23:44.707657654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 
00:23:44.707775 containerd[1475]: time="2025-11-01T00:23:44.707739794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:44.708818 kubelet[2543]: E1101 00:23:44.708770 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:44.709534 kubelet[2543]: E1101 00:23:44.709221 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:44.709534 kubelet[2543]: E1101 00:23:44.709424 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-485nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dd9845dcf-xh7sj_calico-apiserver(d94db435-8568-49d2-8fbb-f0e2ac2a0138): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:44.710703 kubelet[2543]: 
E1101 00:23:44.710648 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:23:45.569499 containerd[1475]: time="2025-11-01T00:23:45.567736338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:45.716509 containerd[1475]: time="2025-11-01T00:23:45.716425323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:45.718424 containerd[1475]: time="2025-11-01T00:23:45.718322177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:45.718613 containerd[1475]: time="2025-11-01T00:23:45.718384247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:45.719190 kubelet[2543]: E1101 00:23:45.719099 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:45.719190 kubelet[2543]: E1101 00:23:45.719179 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:45.719911 kubelet[2543]: E1101 00:23:45.719327 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:45.723237 containerd[1475]: time="2025-11-01T00:23:45.722918327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:45.868966 containerd[1475]: time="2025-11-01T00:23:45.868821017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:45.869991 containerd[1475]: time="2025-11-01T00:23:45.869854999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:45.869991 containerd[1475]: time="2025-11-01T00:23:45.869879759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:45.870177 kubelet[2543]: E1101 00:23:45.870133 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:45.870264 kubelet[2543]: E1101 00:23:45.870193 2543 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:45.870368 kubelet[2543]: E1101 00:23:45.870324 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:45.871881 kubelet[2543]: E1101 00:23:45.871816 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:47.590648 containerd[1475]: time="2025-11-01T00:23:47.590590168Z" level=info msg="StopPodSandbox for \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\"" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.649 [WARNING][5123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"590f4e1f-e213-4b72-aab5-d1ab9906213b", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537", Pod:"goldmane-666569f655-lbk9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali708ad958e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.650 [INFO][5123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.650 [INFO][5123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" iface="eth0" netns="" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.650 [INFO][5123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.650 [INFO][5123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.679 [INFO][5130] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.679 [INFO][5130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.680 [INFO][5130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.685 [WARNING][5130] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.685 [INFO][5130] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.687 [INFO][5130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:47.692425 containerd[1475]: 2025-11-01 00:23:47.689 [INFO][5123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.692425 containerd[1475]: time="2025-11-01T00:23:47.692228749Z" level=info msg="TearDown network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\" successfully" Nov 1 00:23:47.692425 containerd[1475]: time="2025-11-01T00:23:47.692258319Z" level=info msg="StopPodSandbox for \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\" returns successfully" Nov 1 00:23:47.693673 containerd[1475]: time="2025-11-01T00:23:47.692984061Z" level=info msg="RemovePodSandbox for \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\"" Nov 1 00:23:47.693673 containerd[1475]: time="2025-11-01T00:23:47.693013741Z" level=info msg="Forcibly stopping sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\"" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.740 [WARNING][5145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"590f4e1f-e213-4b72-aab5-d1ab9906213b", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"1bb33b565bb64211cbf0ce9cb5a13672d24a66328b2ceccb35b26b0a5fe62537", Pod:"goldmane-666569f655-lbk9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali708ad958e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.740 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.740 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" iface="eth0" netns="" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.740 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.740 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.773 [INFO][5152] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.774 [INFO][5152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.774 [INFO][5152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.782 [WARNING][5152] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.782 [INFO][5152] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" HandleID="k8s-pod-network.51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Workload="172--237--159--149-k8s-goldmane--666569f655--lbk9p-eth0" Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.784 [INFO][5152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:47.791693 containerd[1475]: 2025-11-01 00:23:47.787 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714" Nov 1 00:23:47.792198 containerd[1475]: time="2025-11-01T00:23:47.791713886Z" level=info msg="TearDown network for sandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\" successfully" Nov 1 00:23:47.796289 containerd[1475]: time="2025-11-01T00:23:47.796173885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:47.796452 containerd[1475]: time="2025-11-01T00:23:47.796291716Z" level=info msg="RemovePodSandbox \"51df3be0a85a680f262d8c4dc749c267366a7e33f3454e7eec80e13e470bb714\" returns successfully" Nov 1 00:23:47.798278 containerd[1475]: time="2025-11-01T00:23:47.798236939Z" level=info msg="StopPodSandbox for \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\"" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.839 [WARNING][5167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4", Pod:"calico-apiserver-6dd9845dcf-vd6vw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2adddf9e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.839 [INFO][5167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.839 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" iface="eth0" netns="" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.839 [INFO][5167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.839 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.871 [INFO][5175] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.871 [INFO][5175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.871 [INFO][5175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.879 [WARNING][5175] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.879 [INFO][5175] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.883 [INFO][5175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:47.893802 containerd[1475]: 2025-11-01 00:23:47.886 [INFO][5167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.893802 containerd[1475]: time="2025-11-01T00:23:47.894596091Z" level=info msg="TearDown network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\" successfully" Nov 1 00:23:47.893802 containerd[1475]: time="2025-11-01T00:23:47.894630821Z" level=info msg="StopPodSandbox for \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\" returns successfully" Nov 1 00:23:47.895896 containerd[1475]: time="2025-11-01T00:23:47.895540602Z" level=info msg="RemovePodSandbox for \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\"" Nov 1 00:23:47.895896 containerd[1475]: time="2025-11-01T00:23:47.895577293Z" level=info msg="Forcibly stopping sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\"" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.939 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5e2ac6-875f-4179-9d8d-01e4d536c5f3", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"e23dd263b8e101f0e20da75f7192c6a0820d30483eba6c4cfecad252111dd5f4", Pod:"calico-apiserver-6dd9845dcf-vd6vw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2adddf9e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.940 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.940 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" iface="eth0" netns="" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.940 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.940 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.969 [INFO][5197] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.969 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.969 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.976 [WARNING][5197] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.976 [INFO][5197] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" HandleID="k8s-pod-network.813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--vd6vw-eth0" Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.978 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:47.984085 containerd[1475]: 2025-11-01 00:23:47.980 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4" Nov 1 00:23:47.984627 containerd[1475]: time="2025-11-01T00:23:47.984119518Z" level=info msg="TearDown network for sandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\" successfully" Nov 1 00:23:47.989167 containerd[1475]: time="2025-11-01T00:23:47.988965528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:47.989167 containerd[1475]: time="2025-11-01T00:23:47.989034028Z" level=info msg="RemovePodSandbox \"813747ec5fd8d3623cab23be5c625fc711293a052fff440fd5779774409e7ba4\" returns successfully" Nov 1 00:23:47.989614 containerd[1475]: time="2025-11-01T00:23:47.989578679Z" level=info msg="StopPodSandbox for \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\"" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.030 [WARNING][5211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-csi--node--driver--nw6x5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42a33fba-271a-4a52-bba9-06d9d0613c0c", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271", Pod:"csi-node-driver-nw6x5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0d7aae64c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.030 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.030 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" iface="eth0" netns="" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.030 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.030 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.056 [INFO][5219] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.057 [INFO][5219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.057 [INFO][5219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.062 [WARNING][5219] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.062 [INFO][5219] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.064 [INFO][5219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.070411 containerd[1475]: 2025-11-01 00:23:48.067 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.070411 containerd[1475]: time="2025-11-01T00:23:48.070400941Z" level=info msg="TearDown network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\" successfully" Nov 1 00:23:48.070411 containerd[1475]: time="2025-11-01T00:23:48.070421521Z" level=info msg="StopPodSandbox for \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\" returns successfully" Nov 1 00:23:48.072614 containerd[1475]: time="2025-11-01T00:23:48.071661993Z" level=info msg="RemovePodSandbox for \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\"" Nov 1 00:23:48.072614 containerd[1475]: time="2025-11-01T00:23:48.071704313Z" level=info msg="Forcibly stopping sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\"" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.109 [WARNING][5233] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-csi--node--driver--nw6x5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42a33fba-271a-4a52-bba9-06d9d0613c0c", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"b63c502bb9d9a89538beb137fe7ff36341ac31f5d86bb9916f750ec828a38271", Pod:"csi-node-driver-nw6x5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0d7aae64c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.110 [INFO][5233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.110 [INFO][5233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" iface="eth0" netns="" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.110 [INFO][5233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.110 [INFO][5233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.137 [INFO][5241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.138 [INFO][5241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.138 [INFO][5241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.143 [WARNING][5241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.143 [INFO][5241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" HandleID="k8s-pod-network.bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Workload="172--237--159--149-k8s-csi--node--driver--nw6x5-eth0" Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.145 [INFO][5241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.152581 containerd[1475]: 2025-11-01 00:23:48.149 [INFO][5233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79" Nov 1 00:23:48.152581 containerd[1475]: time="2025-11-01T00:23:48.152462983Z" level=info msg="TearDown network for sandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\" successfully" Nov 1 00:23:48.160377 containerd[1475]: time="2025-11-01T00:23:48.160291618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:48.160451 containerd[1475]: time="2025-11-01T00:23:48.160428698Z" level=info msg="RemovePodSandbox \"bbf5a741d45e22a4d6896a4628c7e5e0ceddbcf213b76afd5415dda265eeea79\" returns successfully" Nov 1 00:23:48.161989 containerd[1475]: time="2025-11-01T00:23:48.161346240Z" level=info msg="StopPodSandbox for \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\"" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.214 [WARNING][5255] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0", GenerateName:"calico-kube-controllers-f84c65659-", Namespace:"calico-system", SelfLink:"", UID:"2e77087b-330c-4d1c-8e6e-77f7214641fd", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84c65659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652", Pod:"calico-kube-controllers-f84c65659-5v5f2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib581a13117f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.215 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.215 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" iface="eth0" netns="" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.215 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.215 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.256 [INFO][5262] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.257 [INFO][5262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.257 [INFO][5262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.265 [WARNING][5262] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.265 [INFO][5262] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.266 [INFO][5262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.272365 containerd[1475]: 2025-11-01 00:23:48.269 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.272365 containerd[1475]: time="2025-11-01T00:23:48.272363226Z" level=info msg="TearDown network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\" successfully" Nov 1 00:23:48.274841 containerd[1475]: time="2025-11-01T00:23:48.272393976Z" level=info msg="StopPodSandbox for \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\" returns successfully" Nov 1 00:23:48.275526 containerd[1475]: time="2025-11-01T00:23:48.275096181Z" level=info msg="RemovePodSandbox for \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\"" Nov 1 00:23:48.275526 containerd[1475]: time="2025-11-01T00:23:48.275162311Z" level=info msg="Forcibly stopping sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\"" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.312 [WARNING][5277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0", GenerateName:"calico-kube-controllers-f84c65659-", Namespace:"calico-system", SelfLink:"", UID:"2e77087b-330c-4d1c-8e6e-77f7214641fd", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84c65659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"a7a290670486b52f849847178a64e716608d539a46d362440ae0b0e25aa31652", Pod:"calico-kube-controllers-f84c65659-5v5f2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib581a13117f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.312 [INFO][5277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.312 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" iface="eth0" netns="" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.312 [INFO][5277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.312 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.353 [INFO][5285] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.353 [INFO][5285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.353 [INFO][5285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.359 [WARNING][5285] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.359 [INFO][5285] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" HandleID="k8s-pod-network.12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Workload="172--237--159--149-k8s-calico--kube--controllers--f84c65659--5v5f2-eth0" Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.362 [INFO][5285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.368657 containerd[1475]: 2025-11-01 00:23:48.365 [INFO][5277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5" Nov 1 00:23:48.370360 containerd[1475]: time="2025-11-01T00:23:48.369336236Z" level=info msg="TearDown network for sandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\" successfully" Nov 1 00:23:48.374097 containerd[1475]: time="2025-11-01T00:23:48.374040955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:48.374281 containerd[1475]: time="2025-11-01T00:23:48.374242215Z" level=info msg="RemovePodSandbox \"12a9ddcdcc8392e5a54ef4cd8002ddb3572ddba7a303b5cb6db95ef7f8f81ca5\" returns successfully" Nov 1 00:23:48.375310 containerd[1475]: time="2025-11-01T00:23:48.375288907Z" level=info msg="StopPodSandbox for \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\"" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.430 [WARNING][5300] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"780eceec-d826-43fc-b38c-894af01c17df", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2", Pod:"coredns-668d6bf9bc-vq8rb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf594afb50f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.430 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.430 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" iface="eth0" netns="" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.430 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.430 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.459 [INFO][5307] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.459 [INFO][5307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.459 [INFO][5307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.469 [WARNING][5307] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.469 [INFO][5307] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.471 [INFO][5307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.483762 containerd[1475]: 2025-11-01 00:23:48.480 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.485077 containerd[1475]: time="2025-11-01T00:23:48.484334600Z" level=info msg="TearDown network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\" successfully" Nov 1 00:23:48.485077 containerd[1475]: time="2025-11-01T00:23:48.484362060Z" level=info msg="StopPodSandbox for \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\" returns successfully" Nov 1 00:23:48.486810 containerd[1475]: time="2025-11-01T00:23:48.486744084Z" level=info msg="RemovePodSandbox for \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\"" Nov 1 00:23:48.486957 containerd[1475]: time="2025-11-01T00:23:48.486913725Z" level=info msg="Forcibly stopping sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\"" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.544 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"780eceec-d826-43fc-b38c-894af01c17df", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"b4877e281d510e91017432ca048a689d0901e7addb7f2e5f3b650067611b2bf2", Pod:"coredns-668d6bf9bc-vq8rb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf594afb50f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.545 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.545 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" iface="eth0" netns="" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.545 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.545 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.572 [INFO][5330] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.572 [INFO][5330] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.572 [INFO][5330] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.577 [WARNING][5330] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.577 [INFO][5330] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" HandleID="k8s-pod-network.3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--vq8rb-eth0" Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.578 [INFO][5330] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.584561 containerd[1475]: 2025-11-01 00:23:48.581 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274" Nov 1 00:23:48.584561 containerd[1475]: time="2025-11-01T00:23:48.584240736Z" level=info msg="TearDown network for sandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\" successfully" Nov 1 00:23:48.588461 containerd[1475]: time="2025-11-01T00:23:48.588306453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:48.588461 containerd[1475]: time="2025-11-01T00:23:48.588356293Z" level=info msg="RemovePodSandbox \"3c260bfce160fe97f880ea3ba31fd6c5ae747e6940176c23ece991b6eb972274\" returns successfully" Nov 1 00:23:48.589381 containerd[1475]: time="2025-11-01T00:23:48.589098525Z" level=info msg="StopPodSandbox for \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\"" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.635 [WARNING][5344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94db435-8568-49d2-8fbb-f0e2ac2a0138", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015", Pod:"calico-apiserver-6dd9845dcf-xh7sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb20d3092a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.636 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.636 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" iface="eth0" netns="" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.636 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.636 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.667 [INFO][5352] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.668 [INFO][5352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.668 [INFO][5352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.695 [WARNING][5352] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.695 [INFO][5352] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.698 [INFO][5352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.705469 containerd[1475]: 2025-11-01 00:23:48.700 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.707432 containerd[1475]: time="2025-11-01T00:23:48.706395443Z" level=info msg="TearDown network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\" successfully" Nov 1 00:23:48.707432 containerd[1475]: time="2025-11-01T00:23:48.706443673Z" level=info msg="StopPodSandbox for \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\" returns successfully" Nov 1 00:23:48.707432 containerd[1475]: time="2025-11-01T00:23:48.707102044Z" level=info msg="RemovePodSandbox for \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\"" Nov 1 00:23:48.707432 containerd[1475]: time="2025-11-01T00:23:48.707127294Z" level=info msg="Forcibly stopping sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\"" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.788 [WARNING][5367] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0", GenerateName:"calico-apiserver-6dd9845dcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94db435-8568-49d2-8fbb-f0e2ac2a0138", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dd9845dcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"192ab57081eb288834461b7fe95e5065a720ec3992c3eb0382e19d31be64a015", Pod:"calico-apiserver-6dd9845dcf-xh7sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb20d3092a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.789 [INFO][5367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.789 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" iface="eth0" netns="" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.789 [INFO][5367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.789 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.834 [INFO][5374] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.835 [INFO][5374] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.835 [INFO][5374] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.860 [WARNING][5374] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.860 [INFO][5374] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" HandleID="k8s-pod-network.ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Workload="172--237--159--149-k8s-calico--apiserver--6dd9845dcf--xh7sj-eth0" Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.862 [INFO][5374] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.872145 containerd[1475]: 2025-11-01 00:23:48.864 [INFO][5367] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e" Nov 1 00:23:48.872145 containerd[1475]: time="2025-11-01T00:23:48.869860436Z" level=info msg="TearDown network for sandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\" successfully" Nov 1 00:23:48.875500 containerd[1475]: time="2025-11-01T00:23:48.875445797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:48.875633 containerd[1475]: time="2025-11-01T00:23:48.875613187Z" level=info msg="RemovePodSandbox \"ae9e13f65b339ce1606ff9b9bc58812a1bcd1c080a2c229e756948a339bfbb4e\" returns successfully" Nov 1 00:23:48.876915 containerd[1475]: time="2025-11-01T00:23:48.876852869Z" level=info msg="StopPodSandbox for \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\"" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.918 [WARNING][5389] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" WorkloadEndpoint="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.918 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.918 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" iface="eth0" netns="" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.919 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.919 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.949 [INFO][5396] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.950 [INFO][5396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.950 [INFO][5396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.958 [WARNING][5396] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.958 [INFO][5396] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.960 [INFO][5396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:48.975729 containerd[1475]: 2025-11-01 00:23:48.970 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:48.976095 containerd[1475]: time="2025-11-01T00:23:48.975787553Z" level=info msg="TearDown network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\" successfully" Nov 1 00:23:48.976095 containerd[1475]: time="2025-11-01T00:23:48.975846883Z" level=info msg="StopPodSandbox for \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\" returns successfully" Nov 1 00:23:48.976523 containerd[1475]: time="2025-11-01T00:23:48.976162534Z" level=info msg="RemovePodSandbox for \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\"" Nov 1 00:23:48.976523 containerd[1475]: time="2025-11-01T00:23:48.976190724Z" level=info msg="Forcibly stopping sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\"" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.037 [WARNING][5410] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" WorkloadEndpoint="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.037 [INFO][5410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.037 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" iface="eth0" netns="" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.037 [INFO][5410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.038 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.074 [INFO][5418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.075 [INFO][5418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.075 [INFO][5418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.084 [WARNING][5418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.084 [INFO][5418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" HandleID="k8s-pod-network.8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Workload="172--237--159--149-k8s-whisker--787b58b497--v8tsm-eth0" Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.085 [INFO][5418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:49.093638 containerd[1475]: 2025-11-01 00:23:49.088 [INFO][5410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd" Nov 1 00:23:49.093638 containerd[1475]: time="2025-11-01T00:23:49.092739630Z" level=info msg="TearDown network for sandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\" successfully" Nov 1 00:23:49.098108 containerd[1475]: time="2025-11-01T00:23:49.098076279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:49.098312 containerd[1475]: time="2025-11-01T00:23:49.098290110Z" level=info msg="RemovePodSandbox \"8441efd6a4b58df03fb3f453fec7ce6c62656616edc2419973fffdd5d36ff2cd\" returns successfully" Nov 1 00:23:49.099713 containerd[1475]: time="2025-11-01T00:23:49.099466062Z" level=info msg="StopPodSandbox for \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\"" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.140 [WARNING][5433] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0", GenerateName:"calico-apiserver-57cb94b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"629a8271-4389-4e02-9056-efb21f586504", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57cb94b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e", Pod:"calico-apiserver-57cb94b8fc-kxpw2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c4507ab101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.141 [INFO][5433] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.141 [INFO][5433] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" iface="eth0" netns="" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.141 [INFO][5433] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.141 [INFO][5433] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.185 [INFO][5440] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.186 [INFO][5440] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.186 [INFO][5440] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.193 [WARNING][5440] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.193 [INFO][5440] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.197 [INFO][5440] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:49.204318 containerd[1475]: 2025-11-01 00:23:49.200 [INFO][5433] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.205441 containerd[1475]: time="2025-11-01T00:23:49.204620985Z" level=info msg="TearDown network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\" successfully" Nov 1 00:23:49.205441 containerd[1475]: time="2025-11-01T00:23:49.204702705Z" level=info msg="StopPodSandbox for \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\" returns successfully" Nov 1 00:23:49.206364 containerd[1475]: time="2025-11-01T00:23:49.206097038Z" level=info msg="RemovePodSandbox for \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\"" Nov 1 00:23:49.206364 containerd[1475]: time="2025-11-01T00:23:49.206124458Z" level=info msg="Forcibly stopping sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\"" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.253 [WARNING][5454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0", GenerateName:"calico-apiserver-57cb94b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"629a8271-4389-4e02-9056-efb21f586504", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57cb94b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"8db746ababf7285eeb9750032eeccbc29479cc33f3a9d84a23fea9f3f6fe385e", Pod:"calico-apiserver-57cb94b8fc-kxpw2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c4507ab101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.254 [INFO][5454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.254 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" iface="eth0" netns="" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.254 [INFO][5454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.254 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.307 [INFO][5461] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.308 [INFO][5461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.308 [INFO][5461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.329 [WARNING][5461] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.329 [INFO][5461] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" HandleID="k8s-pod-network.37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Workload="172--237--159--149-k8s-calico--apiserver--57cb94b8fc--kxpw2-eth0" Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.331 [INFO][5461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:49.338412 containerd[1475]: 2025-11-01 00:23:49.334 [INFO][5454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b" Nov 1 00:23:49.340106 containerd[1475]: time="2025-11-01T00:23:49.339163009Z" level=info msg="TearDown network for sandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\" successfully" Nov 1 00:23:49.347348 containerd[1475]: time="2025-11-01T00:23:49.347222403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:49.347348 containerd[1475]: time="2025-11-01T00:23:49.347263913Z" level=info msg="RemovePodSandbox \"37d2d9cb1c74c7aa3e33106236da80f6275e70f802b8a47fb3d4b1ea4a8aaa7b\" returns successfully" Nov 1 00:23:49.348536 containerd[1475]: time="2025-11-01T00:23:49.347997395Z" level=info msg="StopPodSandbox for \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\"" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.417 [WARNING][5476] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83e6a8a-f958-47de-a7b8-4adca302cf7a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7", Pod:"coredns-668d6bf9bc-2lq56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98388399576", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.418 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.418 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" iface="eth0" netns="" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.418 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.418 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.461 [INFO][5483] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.462 [INFO][5483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.462 [INFO][5483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.469 [WARNING][5483] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.469 [INFO][5483] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.473 [INFO][5483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:49.480984 containerd[1475]: 2025-11-01 00:23:49.476 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.480984 containerd[1475]: time="2025-11-01T00:23:49.480678106Z" level=info msg="TearDown network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\" successfully" Nov 1 00:23:49.480984 containerd[1475]: time="2025-11-01T00:23:49.480711116Z" level=info msg="StopPodSandbox for \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\" returns successfully" Nov 1 00:23:49.481447 containerd[1475]: time="2025-11-01T00:23:49.481349127Z" level=info msg="RemovePodSandbox for \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\"" Nov 1 00:23:49.481447 containerd[1475]: time="2025-11-01T00:23:49.481380027Z" level=info msg="Forcibly stopping sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\"" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.536 [WARNING][5497] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83e6a8a-f958-47de-a7b8-4adca302cf7a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-159-149", ContainerID:"d413ea199e3e0306cc9aef70fe357470f2d54e0eaeaf6104f7a5b619646dc3b7", Pod:"coredns-668d6bf9bc-2lq56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98388399576", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.537 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.537 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" iface="eth0" netns="" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.537 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.537 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.579 [INFO][5504] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.580 [INFO][5504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.581 [INFO][5504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.589 [WARNING][5504] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.589 [INFO][5504] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" HandleID="k8s-pod-network.4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Workload="172--237--159--149-k8s-coredns--668d6bf9bc--2lq56-eth0" Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.590 [INFO][5504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:49.602253 containerd[1475]: 2025-11-01 00:23:49.594 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94" Nov 1 00:23:49.602253 containerd[1475]: time="2025-11-01T00:23:49.601119516Z" level=info msg="TearDown network for sandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\" successfully" Nov 1 00:23:49.605776 containerd[1475]: time="2025-11-01T00:23:49.605578964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:49.605939 containerd[1475]: time="2025-11-01T00:23:49.605837904Z" level=info msg="RemovePodSandbox \"4275343b7add66e82d89f95d8ee5b4a6eecc31218d6e196341a5b3dc6f12fe94\" returns successfully" Nov 1 00:23:54.564549 kubelet[2543]: E1101 00:23:54.562864 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:23:54.565764 kubelet[2543]: E1101 00:23:54.565380 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:23:55.563811 kubelet[2543]: E1101 00:23:55.563454 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:23:57.566066 kubelet[2543]: E1101 00:23:57.565856 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:23:57.575504 kubelet[2543]: E1101 00:23:57.573157 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:23:58.562787 containerd[1475]: time="2025-11-01T00:23:58.562578980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:58.564649 kubelet[2543]: E1101 00:23:58.563972 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:23:58.697162 containerd[1475]: time="2025-11-01T00:23:58.697105517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:58.698263 containerd[1475]: time="2025-11-01T00:23:58.698057365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:58.698263 containerd[1475]: time="2025-11-01T00:23:58.698136015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:58.698362 kubelet[2543]: E1101 
00:23:58.698301 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:58.698362 kubelet[2543]: E1101 00:23:58.698346 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:58.698832 kubelet[2543]: E1101 00:23:58.698452 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4826801892e941d495724508c51c8278,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:58.700685 containerd[1475]: time="2025-11-01T00:23:58.700647312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:58.823426 containerd[1475]: time="2025-11-01T00:23:58.823223404Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:58.824669 containerd[1475]: time="2025-11-01T00:23:58.824444642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:58.824669 containerd[1475]: time="2025-11-01T00:23:58.824537192Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:58.824845 kubelet[2543]: E1101 00:23:58.824714 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:58.824845 kubelet[2543]: E1101 00:23:58.824768 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:58.824982 kubelet[2543]: E1101 00:23:58.824885 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:58.826284 kubelet[2543]: E1101 00:23:58.826223 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:24:08.562862 containerd[1475]: time="2025-11-01T00:24:08.562762160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:08.932884 containerd[1475]: time="2025-11-01T00:24:08.932685456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:08.934526 containerd[1475]: time="2025-11-01T00:24:08.933884435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:08.934526 containerd[1475]: time="2025-11-01T00:24:08.933959974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:08.934599 kubelet[2543]: E1101 00:24:08.934220 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:08.934599 kubelet[2543]: E1101 00:24:08.934267 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:08.934599 kubelet[2543]: E1101 00:24:08.934366 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llcsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f84c65659-5v5f2_calico-system(2e77087b-330c-4d1c-8e6e-77f7214641fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:08.936198 kubelet[2543]: E1101 00:24:08.936150 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:24:09.566881 containerd[1475]: time="2025-11-01T00:24:09.566804416Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:09.696159 containerd[1475]: time="2025-11-01T00:24:09.696095912Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:09.697115 containerd[1475]: time="2025-11-01T00:24:09.697078401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:09.697204 containerd[1475]: time="2025-11-01T00:24:09.697145651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:09.697607 kubelet[2543]: E1101 00:24:09.697550 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:09.697762 kubelet[2543]: E1101 00:24:09.697726 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:09.698130 kubelet[2543]: E1101 00:24:09.698081 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:09.698879 containerd[1475]: time="2025-11-01T00:24:09.698790560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:09.844923 containerd[1475]: time="2025-11-01T00:24:09.844671210Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:09.845977 containerd[1475]: time="2025-11-01T00:24:09.845930139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:09.846090 containerd[1475]: time="2025-11-01T00:24:09.846004829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:09.846198 kubelet[2543]: E1101 00:24:09.846149 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:09.846315 kubelet[2543]: E1101 00:24:09.846199 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:09.846574 kubelet[2543]: E1101 00:24:09.846383 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-px4c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57cb94b8fc-kxpw2_calico-apiserver(629a8271-4389-4e02-9056-efb21f586504): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:09.847436 containerd[1475]: time="2025-11-01T00:24:09.847143958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:09.847813 kubelet[2543]: E1101 00:24:09.847728 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:24:09.991759 containerd[1475]: time="2025-11-01T00:24:09.991452209Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:09.992515 containerd[1475]: time="2025-11-01T00:24:09.992460068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:09.992739 containerd[1475]: time="2025-11-01T00:24:09.992612158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:09.993336 kubelet[2543]: E1101 00:24:09.993280 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:09.994659 kubelet[2543]: E1101 00:24:09.993341 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:09.994659 kubelet[2543]: E1101 00:24:09.993465 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:09.995003 kubelet[2543]: E1101 00:24:09.994918 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:24:10.562811 containerd[1475]: time="2025-11-01T00:24:10.562752046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 
00:24:10.711241 containerd[1475]: time="2025-11-01T00:24:10.711165147Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:10.712513 containerd[1475]: time="2025-11-01T00:24:10.712454986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:10.712636 containerd[1475]: time="2025-11-01T00:24:10.712576156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:10.712894 kubelet[2543]: E1101 00:24:10.712833 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:10.712974 kubelet[2543]: E1101 00:24:10.712902 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:10.713738 kubelet[2543]: E1101 00:24:10.713657 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sntd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lbk9p_calico-system(590f4e1f-e213-4b72-aab5-d1ab9906213b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:10.715191 kubelet[2543]: E1101 00:24:10.715147 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:24:11.565399 containerd[1475]: time="2025-11-01T00:24:11.564534324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:11.566305 kubelet[2543]: E1101 00:24:11.566282 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:11.692971 containerd[1475]: time="2025-11-01T00:24:11.692695518Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:11.694256 containerd[1475]: time="2025-11-01T00:24:11.694215196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:11.694418 containerd[1475]: time="2025-11-01T00:24:11.694306666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:11.694467 kubelet[2543]: E1101 00:24:11.694429 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:11.694572 kubelet[2543]: E1101 00:24:11.694503 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:11.694689 kubelet[2543]: E1101 00:24:11.694628 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-485nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dd9845dcf-xh7sj_calico-apiserver(d94db435-8568-49d2-8fbb-f0e2ac2a0138): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:11.696254 kubelet[2543]: E1101 00:24:11.696182 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:24:12.563245 containerd[1475]: time="2025-11-01T00:24:12.563069279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:12.738550 containerd[1475]: time="2025-11-01T00:24:12.738470794Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:12.739988 
containerd[1475]: time="2025-11-01T00:24:12.739847342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:12.739988 containerd[1475]: time="2025-11-01T00:24:12.739941402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:12.741612 kubelet[2543]: E1101 00:24:12.740149 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:12.741612 kubelet[2543]: E1101 00:24:12.740197 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:12.741612 kubelet[2543]: E1101 00:24:12.740342 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6dd9845dcf-vd6vw_calico-apiserver(2f5e2ac6-875f-4179-9d8d-01e4d536c5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:12.742370 kubelet[2543]: E1101 00:24:12.742296 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:24:14.564781 kubelet[2543]: E1101 00:24:14.563940 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:14.566636 kubelet[2543]: E1101 00:24:14.566568 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:24:20.564344 kubelet[2543]: E1101 00:24:20.563602 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:24:21.566733 kubelet[2543]: E1101 00:24:21.566669 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:24:23.562511 kubelet[2543]: E1101 00:24:23.561719 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:24:24.561739 kubelet[2543]: E1101 00:24:24.561460 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:24.562721 kubelet[2543]: E1101 00:24:24.562649 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:24:25.561916 kubelet[2543]: E1101 00:24:25.561865 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:24:27.564544 kubelet[2543]: E1101 00:24:27.564359 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:24:28.560996 kubelet[2543]: E1101 00:24:28.560946 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:28.563850 kubelet[2543]: E1101 00:24:28.563807 2543 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:24:29.561962 kubelet[2543]: E1101 00:24:29.561887 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:33.566628 kubelet[2543]: E1101 00:24:33.566549 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:24:34.563727 kubelet[2543]: E1101 00:24:34.563667 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:24:35.562233 kubelet[2543]: E1101 00:24:35.561579 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:35.563560 kubelet[2543]: E1101 00:24:35.563385 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:24:35.563560 kubelet[2543]: E1101 00:24:35.563453 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:24:38.562654 kubelet[2543]: E1101 00:24:38.562566 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:24:39.567328 kubelet[2543]: E1101 00:24:39.567283 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:24:42.563429 containerd[1475]: time="2025-11-01T00:24:42.563028593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:42.705021 containerd[1475]: time="2025-11-01T00:24:42.704918040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:42.705993 containerd[1475]: time="2025-11-01T00:24:42.705953210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:42.706068 containerd[1475]: time="2025-11-01T00:24:42.706028300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:24:42.706199 kubelet[2543]: E1101 00:24:42.706156 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:42.706569 kubelet[2543]: 
E1101 00:24:42.706207 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:42.706569 kubelet[2543]: E1101 00:24:42.706313 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4826801892e941d495724508c51c8278,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:42.709194 containerd[1475]: time="2025-11-01T00:24:42.708214299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:24:42.867166 containerd[1475]: time="2025-11-01T00:24:42.866670078Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:42.868267 containerd[1475]: time="2025-11-01T00:24:42.868042738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:42.868267 containerd[1475]: time="2025-11-01T00:24:42.868089708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:42.868525 kubelet[2543]: E1101 00:24:42.868381 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:42.868525 kubelet[2543]: E1101 00:24:42.868455 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:42.868937 kubelet[2543]: E1101 00:24:42.868670 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5cpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546cfc797-vghbp_calico-system(d81fb5e0-40d2-4201-bb4f-f47b80daaf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:42.870131 kubelet[2543]: E1101 00:24:42.870093 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86" Nov 1 00:24:43.275559 systemd[1]: Started sshd@7-172.237.159.149:22-139.178.68.195:35510.service - OpenSSH per-connection server daemon (139.178.68.195:35510). Nov 1 00:24:43.594393 sshd[5576]: Accepted publickey for core from 139.178.68.195 port 35510 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:43.595632 sshd[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:43.600737 systemd-logind[1450]: New session 8 of user core. Nov 1 00:24:43.606635 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:24:43.909968 sshd[5576]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:43.915136 systemd[1]: sshd@7-172.237.159.149:22-139.178.68.195:35510.service: Deactivated successfully. Nov 1 00:24:43.917210 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:24:43.918405 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:24:43.920665 systemd-logind[1450]: Removed session 8. Nov 1 00:24:45.564362 kubelet[2543]: E1101 00:24:45.563250 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c" Nov 1 00:24:46.562245 kubelet[2543]: E1101 00:24:46.562197 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504" Nov 1 00:24:48.563738 kubelet[2543]: E1101 00:24:48.562732 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" 
podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b" Nov 1 00:24:48.977794 systemd[1]: Started sshd@8-172.237.159.149:22-139.178.68.195:35526.service - OpenSSH per-connection server daemon (139.178.68.195:35526). Nov 1 00:24:49.302511 sshd[5599]: Accepted publickey for core from 139.178.68.195 port 35526 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:49.305169 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:49.312477 systemd-logind[1450]: New session 9 of user core. Nov 1 00:24:49.318799 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:24:49.565347 kubelet[2543]: E1101 00:24:49.565229 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3" Nov 1 00:24:49.656315 sshd[5599]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:49.664199 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:24:49.668240 systemd[1]: sshd@8-172.237.159.149:22-139.178.68.195:35526.service: Deactivated successfully. Nov 1 00:24:49.671803 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:24:49.674799 systemd-logind[1450]: Removed session 9. Nov 1 00:24:50.563936 kubelet[2543]: E1101 00:24:50.563674 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138" Nov 1 00:24:50.564253 containerd[1475]: time="2025-11-01T00:24:50.563939411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:50.700006 containerd[1475]: time="2025-11-01T00:24:50.699866219Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:50.702138 containerd[1475]: time="2025-11-01T00:24:50.702097188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:50.702289 containerd[1475]: time="2025-11-01T00:24:50.702204028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:50.702398 kubelet[2543]: E1101 00:24:50.702357 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:50.702865 kubelet[2543]: E1101 00:24:50.702408 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:50.702865 kubelet[2543]: E1101 00:24:50.702594 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llcsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f84c65659-5v5f2_calico-system(2e77087b-330c-4d1c-8e6e-77f7214641fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:50.704601 kubelet[2543]: E1101 00:24:50.704550 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd" Nov 1 00:24:54.733719 systemd[1]: Started sshd@9-172.237.159.149:22-139.178.68.195:51278.service - OpenSSH per-connection server daemon (139.178.68.195:51278). Nov 1 00:24:55.066021 sshd[5617]: Accepted publickey for core from 139.178.68.195 port 51278 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:55.068182 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:55.076560 systemd-logind[1450]: New session 10 of user core. Nov 1 00:24:55.081711 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:24:55.429829 sshd[5617]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:55.433384 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:24:55.434951 systemd[1]: sshd@9-172.237.159.149:22-139.178.68.195:51278.service: Deactivated successfully. Nov 1 00:24:55.437270 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:24:55.440328 systemd-logind[1450]: Removed session 10. Nov 1 00:24:55.492709 systemd[1]: Started sshd@10-172.237.159.149:22-139.178.68.195:51282.service - OpenSSH per-connection server daemon (139.178.68.195:51282). Nov 1 00:24:55.562083 kubelet[2543]: E1101 00:24:55.562048 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:55.562651 kubelet[2543]: E1101 00:24:55.562048 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Nov 1 00:24:55.843383 sshd[5633]: Accepted publickey for core from 139.178.68.195 port 51282 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:55.847029 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:55.853781 systemd-logind[1450]: New session 11 of user core. Nov 1 00:24:55.858618 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:24:56.252590 sshd[5633]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:56.256390 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:24:56.259790 systemd[1]: sshd@10-172.237.159.149:22-139.178.68.195:51282.service: Deactivated successfully. Nov 1 00:24:56.263611 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:24:56.265869 systemd-logind[1450]: Removed session 11. Nov 1 00:24:56.315718 systemd[1]: Started sshd@11-172.237.159.149:22-139.178.68.195:51284.service - OpenSSH per-connection server daemon (139.178.68.195:51284). 
Nov 1 00:24:56.653105 sshd[5648]: Accepted publickey for core from 139.178.68.195 port 51284 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:56.655574 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:56.662291 systemd-logind[1450]: New session 12 of user core. Nov 1 00:24:56.668670 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:24:56.988200 sshd[5648]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:56.998346 systemd[1]: sshd@11-172.237.159.149:22-139.178.68.195:51284.service: Deactivated successfully. Nov 1 00:24:56.999467 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:24:57.008296 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:24:57.015238 systemd-logind[1450]: Removed session 12. Nov 1 00:24:57.573774 containerd[1475]: time="2025-11-01T00:24:57.573722481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:57.720622 containerd[1475]: time="2025-11-01T00:24:57.720429642Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:57.721392 containerd[1475]: time="2025-11-01T00:24:57.721307332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:57.721611 containerd[1475]: time="2025-11-01T00:24:57.721369551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:57.721954 kubelet[2543]: E1101 00:24:57.721827 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:57.721954 kubelet[2543]: E1101 00:24:57.721923 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:57.723002 kubelet[2543]: E1101 00:24:57.722859 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:57.725507 containerd[1475]: time="2025-11-01T00:24:57.725311490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:57.892019 containerd[1475]: time="2025-11-01T00:24:57.891463844Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:57.892728 containerd[1475]: time="2025-11-01T00:24:57.892591074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:57.892728 containerd[1475]: time="2025-11-01T00:24:57.892626634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:57.892833 kubelet[2543]: E1101 00:24:57.892807 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:57.892896 kubelet[2543]: E1101 00:24:57.892847 2543 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:24:57.892985 kubelet[2543]: E1101 00:24:57.892939 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8zcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nw6x5_calico-system(42a33fba-271a-4a52-bba9-06d9d0613c0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:24:57.894344 kubelet[2543]: E1101 00:24:57.894276 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c"
Nov 1 00:24:58.564139 kubelet[2543]: E1101 00:24:58.564007 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86"
Nov 1 00:25:00.563379 containerd[1475]: time="2025-11-01T00:25:00.563265167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:25:00.708167 containerd[1475]: time="2025-11-01T00:25:00.707660460Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:25:00.708871 containerd[1475]: time="2025-11-01T00:25:00.708731890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:25:00.709670 containerd[1475]: time="2025-11-01T00:25:00.709107600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:25:00.709756 kubelet[2543]: E1101 00:25:00.709696 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:25:00.709756 kubelet[2543]: E1101 00:25:00.709742 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:25:00.710117 kubelet[2543]: E1101 00:25:00.709848 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dd9845dcf-vd6vw_calico-apiserver(2f5e2ac6-875f-4179-9d8d-01e4d536c5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:25:00.711920 kubelet[2543]: E1101 00:25:00.711870 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3"
Nov 1 00:25:01.563221 kubelet[2543]: E1101 00:25:01.562298 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd"
Nov 1 00:25:01.565295 containerd[1475]: time="2025-11-01T00:25:01.564585158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:25:01.699672 containerd[1475]: time="2025-11-01T00:25:01.699575325Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:25:01.700743 containerd[1475]: time="2025-11-01T00:25:01.700562314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:25:01.700743 containerd[1475]: time="2025-11-01T00:25:01.700657704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:25:01.701623 kubelet[2543]: E1101 00:25:01.700868 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:25:01.701623 kubelet[2543]: E1101 00:25:01.700918 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:25:01.701623 kubelet[2543]: E1101 00:25:01.701040 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-px4c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57cb94b8fc-kxpw2_calico-apiserver(629a8271-4389-4e02-9056-efb21f586504): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:25:01.702512 kubelet[2543]: E1101 00:25:01.702457 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504"
Nov 1 00:25:02.050806 systemd[1]: Started sshd@12-172.237.159.149:22-139.178.68.195:51288.service - OpenSSH per-connection server daemon (139.178.68.195:51288).
Nov 1 00:25:02.376631 sshd[5697]: Accepted publickey for core from 139.178.68.195 port 51288 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:02.378420 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:02.387556 systemd-logind[1450]: New session 13 of user core.
Nov 1 00:25:02.393609 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 1 00:25:02.682556 sshd[5697]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:02.687383 systemd[1]: sshd@12-172.237.159.149:22-139.178.68.195:51288.service: Deactivated successfully.
Nov 1 00:25:02.690234 systemd[1]: session-13.scope: Deactivated successfully.
Nov 1 00:25:02.691797 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
Nov 1 00:25:02.693289 systemd-logind[1450]: Removed session 13.
Nov 1 00:25:02.745756 systemd[1]: Started sshd@13-172.237.159.149:22-139.178.68.195:51302.service - OpenSSH per-connection server daemon (139.178.68.195:51302).
Nov 1 00:25:03.063724 sshd[5710]: Accepted publickey for core from 139.178.68.195 port 51302 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:03.065880 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:03.072891 systemd-logind[1450]: New session 14 of user core.
Nov 1 00:25:03.079638 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 1 00:25:03.564902 containerd[1475]: time="2025-11-01T00:25:03.564547484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 1 00:25:03.707548 containerd[1475]: time="2025-11-01T00:25:03.707506599Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:25:03.709331 containerd[1475]: time="2025-11-01T00:25:03.708827439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 1 00:25:03.709331 containerd[1475]: time="2025-11-01T00:25:03.708897119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:25:03.709852 kubelet[2543]: E1101 00:25:03.709623 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:25:03.709852 kubelet[2543]: E1101 00:25:03.709667 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:25:03.709852 kubelet[2543]: E1101 00:25:03.709799 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sntd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lbk9p_calico-system(590f4e1f-e213-4b72-aab5-d1ab9906213b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:25:03.712223 kubelet[2543]: E1101 00:25:03.711669 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b"
Nov 1 00:25:03.712769 sshd[5710]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:03.715932 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
Nov 1 00:25:03.717254 systemd[1]: sshd@13-172.237.159.149:22-139.178.68.195:51302.service: Deactivated successfully.
Nov 1 00:25:03.724645 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 00:25:03.728223 systemd-logind[1450]: Removed session 14.
Nov 1 00:25:03.779330 systemd[1]: Started sshd@14-172.237.159.149:22-139.178.68.195:40718.service - OpenSSH per-connection server daemon (139.178.68.195:40718).
Nov 1 00:25:04.109512 sshd[5728]: Accepted publickey for core from 139.178.68.195 port 40718 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:04.111624 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:04.116155 systemd-logind[1450]: New session 15 of user core.
Nov 1 00:25:04.121641 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 1 00:25:05.069620 sshd[5728]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:05.075633 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:25:05.075944 systemd[1]: sshd@14-172.237.159.149:22-139.178.68.195:40718.service: Deactivated successfully.
Nov 1 00:25:05.079722 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:25:05.086019 systemd-logind[1450]: Removed session 15.
Nov 1 00:25:05.136751 systemd[1]: Started sshd@15-172.237.159.149:22-139.178.68.195:40722.service - OpenSSH per-connection server daemon (139.178.68.195:40722).
Nov 1 00:25:05.467919 sshd[5747]: Accepted publickey for core from 139.178.68.195 port 40722 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:05.471044 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:05.478774 systemd-logind[1450]: New session 16 of user core.
Nov 1 00:25:05.485671 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 1 00:25:05.578435 containerd[1475]: time="2025-11-01T00:25:05.578386734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:25:05.738776 containerd[1475]: time="2025-11-01T00:25:05.737715156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:25:05.738776 containerd[1475]: time="2025-11-01T00:25:05.738514856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:25:05.738776 containerd[1475]: time="2025-11-01T00:25:05.738581546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:25:05.739029 kubelet[2543]: E1101 00:25:05.738756 2543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:25:05.739029 kubelet[2543]: E1101 00:25:05.738799 2543 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:25:05.739029 kubelet[2543]: E1101 00:25:05.738901 2543 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-485nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dd9845dcf-xh7sj_calico-apiserver(d94db435-8568-49d2-8fbb-f0e2ac2a0138): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:25:05.740081 kubelet[2543]: E1101 00:25:05.740000 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138"
Nov 1 00:25:05.905304 sshd[5747]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:05.911808 systemd[1]: sshd@15-172.237.159.149:22-139.178.68.195:40722.service: Deactivated successfully.
Nov 1 00:25:05.912171 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:25:05.914612 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:25:05.918145 systemd-logind[1450]: Removed session 16.
Nov 1 00:25:05.977801 systemd[1]: Started sshd@16-172.237.159.149:22-139.178.68.195:40736.service - OpenSSH per-connection server daemon (139.178.68.195:40736).
Nov 1 00:25:06.313514 sshd[5758]: Accepted publickey for core from 139.178.68.195 port 40736 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:06.314290 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:06.321028 systemd-logind[1450]: New session 17 of user core.
Nov 1 00:25:06.327619 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 00:25:06.646764 sshd[5758]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:06.651053 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:25:06.651772 systemd[1]: sshd@16-172.237.159.149:22-139.178.68.195:40736.service: Deactivated successfully.
Nov 1 00:25:06.656059 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:25:06.659361 systemd-logind[1450]: Removed session 17.
Nov 1 00:25:11.567848 kubelet[2543]: E1101 00:25:11.567323 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86"
Nov 1 00:25:11.567848 kubelet[2543]: E1101 00:25:11.567418 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3"
Nov 1 00:25:11.567848 kubelet[2543]: E1101 00:25:11.567475 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nw6x5" podUID="42a33fba-271a-4a52-bba9-06d9d0613c0c"
Nov 1 00:25:11.705718 systemd[1]: Started sshd@17-172.237.159.149:22-139.178.68.195:40750.service - OpenSSH per-connection server daemon (139.178.68.195:40750).
Nov 1 00:25:12.040311 sshd[5772]: Accepted publickey for core from 139.178.68.195 port 40750 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:12.042101 sshd[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:12.051337 systemd-logind[1450]: New session 18 of user core.
Nov 1 00:25:12.058752 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 00:25:12.367828 sshd[5772]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:12.374972 systemd[1]: sshd@17-172.237.159.149:22-139.178.68.195:40750.service: Deactivated successfully.
Nov 1 00:25:12.380549 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:25:12.382573 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:25:12.384243 systemd-logind[1450]: Removed session 18.
Nov 1 00:25:13.566039 kubelet[2543]: E1101 00:25:13.565917 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd"
Nov 1 00:25:14.563668 kubelet[2543]: E1101 00:25:14.563125 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lbk9p" podUID="590f4e1f-e213-4b72-aab5-d1ab9906213b"
Nov 1 00:25:14.564064 kubelet[2543]: E1101 00:25:14.563858 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57cb94b8fc-kxpw2" podUID="629a8271-4389-4e02-9056-efb21f586504"
Nov 1 00:25:16.561347 kubelet[2543]: E1101 00:25:16.561301 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Nov 1 00:25:17.431236 systemd[1]: Started sshd@18-172.237.159.149:22-139.178.68.195:33746.service - OpenSSH per-connection server daemon (139.178.68.195:33746).
Nov 1 00:25:17.771766 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 33746 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:17.774011 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:17.779121 systemd-logind[1450]: New session 19 of user core.
Nov 1 00:25:17.787773 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 00:25:18.106952 sshd[5785]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:18.111597 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:25:18.112916 systemd[1]: sshd@18-172.237.159.149:22-139.178.68.195:33746.service: Deactivated successfully.
Nov 1 00:25:18.114843 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:25:18.115826 systemd-logind[1450]: Removed session 19.
Nov 1 00:25:21.564516 kubelet[2543]: E1101 00:25:21.564426 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-xh7sj" podUID="d94db435-8568-49d2-8fbb-f0e2ac2a0138"
Nov 1 00:25:23.170535 systemd[1]: Started sshd@19-172.237.159.149:22-139.178.68.195:37998.service - OpenSSH per-connection server daemon (139.178.68.195:37998).
Nov 1 00:25:23.496417 sshd[5800]: Accepted publickey for core from 139.178.68.195 port 37998 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:25:23.497231 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:23.503856 systemd-logind[1450]: New session 20 of user core.
Nov 1 00:25:23.511650 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 00:25:23.563269 kubelet[2543]: E1101 00:25:23.563219 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dd9845dcf-vd6vw" podUID="2f5e2ac6-875f-4179-9d8d-01e4d536c5f3"
Nov 1 00:25:23.828735 sshd[5800]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:23.833317 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:25:23.834260 systemd[1]: sshd@19-172.237.159.149:22-139.178.68.195:37998.service: Deactivated successfully.
Nov 1 00:25:23.839471 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:25:23.841835 systemd-logind[1450]: Removed session 20.
Nov 1 00:25:25.567096 kubelet[2543]: E1101 00:25:25.567037 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f84c65659-5v5f2" podUID="2e77087b-330c-4d1c-8e6e-77f7214641fd"
Nov 1 00:25:25.569366 kubelet[2543]: E1101 00:25:25.569122 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546cfc797-vghbp" podUID="d81fb5e0-40d2-4201-bb4f-f47b80daaf86"