Nov 1 00:24:22.055805 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:24:22.055828 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:24:22.055836 kernel: BIOS-provided physical RAM map:
Nov 1 00:24:22.055843 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 1 00:24:22.055848 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 1 00:24:22.055857 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:24:22.055864 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 1 00:24:22.055870 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 1 00:24:22.055875 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:24:22.055881 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:24:22.055887 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:24:22.055892 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:24:22.055898 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 1 00:24:22.055906 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:24:22.055913 kernel: NX (Execute Disable) protection: active
Nov 1 00:24:22.055919 kernel: APIC: Static calls initialized
Nov 1 00:24:22.055925 kernel: SMBIOS 2.8 present.
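A quick cross-check of the e820 map above: summing only the three ranges marked "usable" should land close to the 4193772K total the kernel reports later in its "Memory:" line. A minimal sketch in Python, using only values copied from the log (the small remaining gap is page-boundary trimming by the kernel):

    # Sum the three BIOS-e820 ranges marked "usable" (end addresses are inclusive).
    usable = [
        (0x0000000000000000, 0x000000000009f7ff),
        (0x0000000000100000, 0x000000007ffdcfff),
        (0x0000000100000000, 0x000000017fffffff),
    ]
    total_kib = sum(end - start + 1 for start, end in usable) // 1024
    print(total_kib)  # 4193778 KiB, within a few KiB of the reported 4193772K total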
Nov 1 00:24:22.055931 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 1 00:24:22.055938 kernel: Hypervisor detected: KVM
Nov 1 00:24:22.055946 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:24:22.055952 kernel: kvm-clock: using sched offset of 5810140620 cycles
Nov 1 00:24:22.055959 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:24:22.055965 kernel: tsc: Detected 1999.997 MHz processor
Nov 1 00:24:22.055971 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:24:22.055978 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:24:22.055984 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 1 00:24:22.055990 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 00:24:22.055997 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:24:22.056005 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 1 00:24:22.056011 kernel: Using GB pages for direct mapping
Nov 1 00:24:22.056018 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:24:22.056024 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 1 00:24:22.056030 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:24:22.056036 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:24:22.056042 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:24:22.056048 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 1 00:24:22.056054 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:24:22.056063 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:24:22.056069 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:24:22.056076 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:24:22.056274 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 1 00:24:22.056281 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 1 00:24:22.056287 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 1 00:24:22.056296 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 1 00:24:22.056303 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 1 00:24:22.056309 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 1 00:24:22.056315 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 1 00:24:22.056322 kernel: No NUMA configuration found
Nov 1 00:24:22.056328 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 1 00:24:22.056334 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Nov 1 00:24:22.056341 kernel: Zone ranges:
Nov 1 00:24:22.056350 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:24:22.056356 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:24:22.056362 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 1 00:24:22.056369 kernel: Movable zone start for each node
Nov 1 00:24:22.056375 kernel: Early memory node ranges
Nov 1 00:24:22.056381 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:24:22.056388 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 1 00:24:22.056394 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 1 00:24:22.056400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 1 00:24:22.056409 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:24:22.056415 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:24:22.056422 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 1 00:24:22.056428 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:24:22.056435 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:24:22.056441 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:24:22.056448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:24:22.056454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:24:22.056461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:24:22.056470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:24:22.056477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:24:22.056483 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:24:22.056489 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:24:22.056496 kernel: TSC deadline timer available
Nov 1 00:24:22.056502 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:24:22.056508 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:24:22.056515 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:24:22.056521 kernel: kvm-guest: setup PV sched yield
Nov 1 00:24:22.056527 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:24:22.056536 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:24:22.056543 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:24:22.056549 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:24:22.056556 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:24:22.056562 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:24:22.056569 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:24:22.056575 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:24:22.056582 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:24:22.056589 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:24:22.056598 kernel: random: crng init done
Nov 1 00:24:22.056604 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:24:22.056611 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:24:22.056617 kernel: Fallback order for Node 0: 0
Nov 1 00:24:22.056624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Nov 1 00:24:22.056630 kernel: Policy zone: Normal
Nov 1 00:24:22.056636 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:24:22.056643 kernel: software IO TLB: area num 2.
Nov 1 00:24:22.056652 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 227308K reserved, 0K cma-reserved)
Nov 1 00:24:22.056659 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:24:22.056665 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:24:22.056672 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:24:22.056678 kernel: Dynamic Preempt: voluntary
Nov 1 00:24:22.056685 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:24:22.056692 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:24:22.056699 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:24:22.056705 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:24:22.056715 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:24:22.056721 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:24:22.056759 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:24:22.056769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:24:22.056776 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:24:22.056782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:24:22.056789 kernel: Console: colour VGA+ 80x25
Nov 1 00:24:22.056795 kernel: printk: console [tty0] enabled
Nov 1 00:24:22.056801 kernel: printk: console [ttyS0] enabled
Nov 1 00:24:22.056813 kernel: ACPI: Core revision 20230628
Nov 1 00:24:22.056820 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:24:22.056826 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:24:22.056833 kernel: x2apic enabled
Nov 1 00:24:22.056850 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:24:22.056860 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 00:24:22.056866 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 00:24:22.056873 kernel: kvm-guest: setup PV IPIs
Nov 1 00:24:22.056880 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:24:22.056886 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:24:22.056893 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Nov 1 00:24:22.056900 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:24:22.056910 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:24:22.056916 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:24:22.056923 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:24:22.056930 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:24:22.056940 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:24:22.056946 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 00:24:22.056953 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:24:22.056960 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:24:22.056967 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 00:24:22.056974 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 00:24:22.056981 kernel: active return thunk: srso_alias_return_thunk
Nov 1 00:24:22.056988 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 00:24:22.056994 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 1 00:24:22.057004 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:24:22.057011 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:24:22.057018 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:24:22.057025 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:24:22.057032 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 1 00:24:22.057038 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:24:22.057045 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 1 00:24:22.057052 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 1 00:24:22.057061 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:24:22.057068 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:24:22.057075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:24:22.057082 kernel: landlock: Up and running.
Nov 1 00:24:22.057088 kernel: SELinux: Initializing.
Nov 1 00:24:22.057095 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:24:22.057102 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:24:22.057109 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 1 00:24:22.057115 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:24:22.057126 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:24:22.057133 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:24:22.057139 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:24:22.057146 kernel: ... version: 0
Nov 1 00:24:22.057153 kernel: ... bit width: 48
Nov 1 00:24:22.057159 kernel: ... generic registers: 6
Nov 1 00:24:22.057166 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:24:22.057173 kernel: ... max period: 00007fffffffffff
Nov 1 00:24:22.057180 kernel: ... fixed-purpose events: 0
Nov 1 00:24:22.057189 kernel: ... event mask: 000000000000003f
Nov 1 00:24:22.057196 kernel: signal: max sigframe size: 3376
Nov 1 00:24:22.057203 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:24:22.057210 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:24:22.057217 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:24:22.057224 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:24:22.057230 kernel: .... node #0, CPUs: #1
Nov 1 00:24:22.057237 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:24:22.057244 kernel: smpboot: Max logical packages: 1
Nov 1 00:24:22.057250 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Nov 1 00:24:22.057459 kernel: devtmpfs: initialized
Nov 1 00:24:22.057466 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:24:22.057473 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:24:22.057480 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:24:22.057486 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:24:22.057493 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:24:22.057499 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:24:22.057506 kernel: audit: type=2000 audit(1761956660.757:1): state=initialized audit_enabled=0 res=1
Nov 1 00:24:22.057513 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:24:22.057522 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:24:22.057529 kernel: cpuidle: using governor menu
Nov 1 00:24:22.057536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:24:22.057542 kernel: dca service started, version 1.12.1
Nov 1 00:24:22.057549 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:24:22.057556 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 00:24:22.057563 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:24:22.057569 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:24:22.057580 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:24:22.057587 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:24:22.057593 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:24:22.057600 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:24:22.057607 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:24:22.057613 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:24:22.057620 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:24:22.057627 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:24:22.057633 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:24:22.057643 kernel: ACPI: Interpreter enabled
Nov 1 00:24:22.057650 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:24:22.057657 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:24:22.057664 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:24:22.057670 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:24:22.057677 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:24:22.057684 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:24:22.058921 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:24:22.059072 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:24:22.059204 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:24:22.059214 kernel: PCI host bridge to bus 0000:00
Nov 1 00:24:22.059344 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:24:22.059510 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:24:22.059636 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:24:22.059797 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 1 00:24:22.059928 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:24:22.060043 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 1 00:24:22.060157 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:24:22.060302 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:24:22.060440 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:24:22.060568 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 1 00:24:22.060694 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 1 00:24:22.062885 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 1 00:24:22.063023 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:24:22.063170 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Nov 1 00:24:22.063301 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Nov 1 00:24:22.063431 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 1 00:24:22.063558 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 1 00:24:22.063692 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:24:22.063870 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 1 00:24:22.064004 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 1 00:24:22.064305 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 1 00:24:22.064432 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 1 00:24:22.064567 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:24:22.064693 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:24:22.064892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:24:22.065026 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Nov 1 00:24:22.065152 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Nov 1 00:24:22.065288 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:24:22.065414 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:24:22.065424 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:24:22.065431 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:24:22.065443 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:24:22.065450 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:24:22.065457 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:24:22.065464 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:24:22.065470 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:24:22.065477 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:24:22.065484 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:24:22.065491 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:24:22.065498 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:24:22.065507 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:24:22.065515 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:24:22.065521 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:24:22.065528 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:24:22.065535 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:24:22.065542 kernel: iommu: Default domain type: Translated
Nov 1 00:24:22.065549 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:24:22.065556 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:24:22.065562 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:24:22.065572 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 1 00:24:22.065578 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 1 00:24:22.065704 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:24:22.065887 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:24:22.066020 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:24:22.066030 kernel: vgaarb: loaded
Nov 1 00:24:22.066038 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:24:22.066044 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:24:22.066051 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:24:22.066064 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:24:22.066071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:24:22.066078 kernel: pnp: PnP ACPI init
Nov 1 00:24:22.066212 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:24:22.066223 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 00:24:22.066230 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:24:22.066437 kernel: NET: Registered PF_INET protocol family
Nov 1 00:24:22.066444 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:24:22.066454 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:24:22.066462 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:24:22.066468 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:24:22.066475 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 00:24:22.066482 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:24:22.066489 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:24:22.066496 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:24:22.066503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:24:22.066510 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:24:22.066632 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:24:22.067137 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:24:22.067454 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:24:22.067569 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 1 00:24:22.067795 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:24:22.067933 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 1 00:24:22.067944 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:24:22.067951 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:24:22.067963 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 1 00:24:22.067970 kernel: Initialise system trusted keyrings
Nov 1 00:24:22.067977 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:24:22.067984 kernel: Key type asymmetric registered
Nov 1 00:24:22.067990 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:24:22.067997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:24:22.068004 kernel: io scheduler mq-deadline registered
Nov 1 00:24:22.068010 kernel: io scheduler kyber registered
Nov 1 00:24:22.068017 kernel: io scheduler bfq registered
Nov 1 00:24:22.068027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:24:22.068034 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:24:22.068041 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:24:22.068048 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:24:22.068055 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:24:22.068062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:24:22.068068 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:24:22.068075 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:24:22.068394 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:24:22.068409 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:24:22.068527 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:24:22.068644 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:24:21 UTC (1761956661)
Nov 1 00:24:22.068799 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:24:22.068814 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 00:24:22.068821 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:24:22.068828 kernel: Segment Routing with IPv6
Nov 1 00:24:22.068835 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:24:22.068847 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:24:22.068854 kernel: Key type dns_resolver registered
Nov 1 00:24:22.068861 kernel: IPI shorthand broadcast: enabled
Nov 1 00:24:22.068868 kernel: sched_clock: Marking stable (937006278, 364148296)->(1437312286, -136157712)
Nov 1 00:24:22.068875 kernel: registered taskstats version 1
Nov 1 00:24:22.068882 kernel: Loading compiled-in X.509 certificates
Nov 1 00:24:22.068889 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:24:22.068895 kernel: Key type .fscrypt registered
Nov 1 00:24:22.068902 kernel: Key type fscrypt-provisioning registered
Nov 1 00:24:22.068912 kernel: ima: No TPM chip found, activating TPM-bypass!
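The rtc_cmos line above pairs the wall-clock time with its raw epoch value, and the earlier audit stamp (1761956660.757) agrees to within a second. A one-line check of the conversion, using only the value from the log:

    from datetime import datetime, timezone

    # 1761956661 is the epoch value rtc_cmos logged when setting the system clock.
    print(datetime.fromtimestamp(1761956661, tz=timezone.utc))
    # -> 2025-11-01 00:24:21+00:00, matching "setting system clock to 2025-11-01T00:24:21 UTC"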
Nov 1 00:24:22.068919 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:24:22.068926 kernel: ima: No architecture policies found
Nov 1 00:24:22.068932 kernel: clk: Disabling unused clocks
Nov 1 00:24:22.068939 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:24:22.068946 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:24:22.068953 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:24:22.068960 kernel: Run /init as init process
Nov 1 00:24:22.068967 kernel: with arguments:
Nov 1 00:24:22.068976 kernel: /init
Nov 1 00:24:22.068983 kernel: with environment:
Nov 1 00:24:22.068990 kernel: HOME=/
Nov 1 00:24:22.068997 kernel: TERM=linux
Nov 1 00:24:22.069005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:24:22.069014 systemd[1]: Detected virtualization kvm.
Nov 1 00:24:22.069022 systemd[1]: Detected architecture x86-64.
Nov 1 00:24:22.069032 systemd[1]: Running in initrd.
Nov 1 00:24:22.069039 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:24:22.069046 systemd[1]: Hostname set to .
Nov 1 00:24:22.069054 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:24:22.069061 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:24:22.069069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:24:22.069091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:24:22.069102 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:24:22.069110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:24:22.069118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:24:22.069126 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:24:22.069135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:24:22.069142 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:24:22.069153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:24:22.069161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:24:22.069168 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:24:22.069176 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:24:22.069183 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:24:22.069191 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:24:22.069198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:24:22.069205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:24:22.069213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:24:22.069223 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:24:22.069230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:24:22.069441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:24:22.069455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:24:22.069463 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:24:22.069471 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:24:22.069479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:24:22.069486 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:24:22.069498 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:24:22.069506 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:24:22.069513 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:24:22.069543 systemd-journald[177]: Collecting audit messages is disabled.
Nov 1 00:24:22.069564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:24:22.069572 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:24:22.069583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:24:22.069591 systemd-journald[177]: Journal started
Nov 1 00:24:22.069610 systemd-journald[177]: Runtime Journal (/run/log/journal/ae5889afe154496db65f799d4b068e13) is 8.0M, max 78.3M, 70.3M free.
Nov 1 00:24:22.077462 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:24:22.078516 systemd-modules-load[178]: Inserted module 'overlay'
Nov 1 00:24:22.079221 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:24:22.094895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:24:22.208045 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:24:22.208066 kernel: Bridge firewalling registered
Nov 1 00:24:22.109852 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:24:22.114879 systemd-modules-load[178]: Inserted module 'br_netfilter'
Nov 1 00:24:22.210924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:24:22.214397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:24:22.227287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:24:22.234917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:24:22.236877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:24:22.264200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:24:22.265786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:24:22.277915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:24:22.286902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:24:22.291176 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:24:22.295498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:24:22.305944 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:24:22.324288 dracut-cmdline[212]: dracut-dracut-053
Nov 1 00:24:22.327494 systemd-resolved[202]: Positive Trust Anchors:
Nov 1 00:24:22.327515 systemd-resolved[202]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:24:22.333235 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:24:22.327544 systemd-resolved[202]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:24:22.337116 systemd-resolved[202]: Defaulting to hostname 'linux'.
Nov 1 00:24:22.338686 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:24:22.340325 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:24:22.429784 kernel: SCSI subsystem initialized
Nov 1 00:24:22.439801 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:24:22.452764 kernel: iscsi: registered transport (tcp)
Nov 1 00:24:22.476072 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:24:22.476174 kernel: QLogic iSCSI HBA Driver
Nov 1 00:24:22.548134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:24:22.553950 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:24:22.585096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:24:22.585186 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:24:22.587403 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:24:22.634764 kernel: raid6: avx2x4 gen() 32578 MB/s
Nov 1 00:24:22.652758 kernel: raid6: avx2x2 gen() 30568 MB/s
Nov 1 00:24:22.671378 kernel: raid6: avx2x1 gen() 24949 MB/s
Nov 1 00:24:22.671401 kernel: raid6: using algorithm avx2x4 gen() 32578 MB/s
Nov 1 00:24:22.694123 kernel: raid6: .... xor() 5133 MB/s, rmw enabled
Nov 1 00:24:22.694145 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:24:22.717783 kernel: xor: automatically using best checksumming function avx
Nov 1 00:24:22.860798 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:24:22.880255 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:24:22.891950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:24:22.904297 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Nov 1 00:24:22.909493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:24:22.919700 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:24:22.955767 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Nov 1 00:24:22.994509 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:24:23.002908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:24:23.082040 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:24:23.093041 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:24:23.110619 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:24:23.115488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:24:23.119069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:24:23.121150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:24:23.128892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:24:23.152008 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:24:23.198782 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:24:23.210752 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:24:23.211480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:24:23.213367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:24:23.217717 kernel: libata version 3.00 loaded.
Nov 1 00:24:23.216578 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:24:23.218838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:24:23.226050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:24:23.228640 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:24:23.228658 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:24:23.230406 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:24:23.238236 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 1 00:24:23.381894 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:24:23.377174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:24:23.418765 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:24:23.427994 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:24:23.429188 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:24:23.434665 kernel: scsi host1: ahci
Nov 1 00:24:23.436598 kernel: scsi host2: ahci
Nov 1 00:24:23.441790 kernel: scsi host3: ahci
Nov 1 00:24:23.444749 kernel: scsi host4: ahci
Nov 1 00:24:23.446911 kernel: scsi host5: ahci
Nov 1 00:24:23.461118 kernel: scsi host6: ahci
Nov 1 00:24:23.461351 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Nov 1 00:24:23.461375 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Nov 1 00:24:23.461386 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Nov 1 00:24:23.461395 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Nov 1 00:24:23.461406 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Nov 1 00:24:23.461416 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Nov 1 00:24:23.564320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:24:23.570946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:24:23.588916 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:24:23.772749 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:24:23.772786 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 1 00:24:23.776120 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:24:23.776750 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 00:24:23.779754 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:24:23.785775 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:24:23.808032 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 1 00:24:23.808453 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 1 00:24:23.836617 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 1 00:24:23.836884 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 1 00:24:23.837084 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 00:24:23.845979 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:24:23.846006 kernel: GPT:9289727 != 167739391
Nov 1 00:24:23.848046 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:24:23.851690 kernel: GPT:9289727 != 167739391
Nov 1 00:24:23.851716 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:24:23.856035 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:24:23.858813 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 1 00:24:23.899047 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (448)
Nov 1 00:24:23.904049 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 1 00:24:23.907000 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (451)
Nov 1 00:24:23.922167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 1 00:24:23.934156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 1 00:24:23.939441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
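The GPT warnings above are the expected first-boot symptom of a disk image built for a smaller disk: the backup GPT header sits at sector 9289727 rather than at the last sector of the provisioned volume (167739391), and disk-uuid repairs it a few lines below. The arithmetic, reproduced from the logged values (the 512-byte sector size comes from the sd line):

    SECTOR = 512
    disk_sectors = 167739392       # "[sda] 167739392 512-byte logical blocks"
    image_alt_header = 9289727     # sector where the image's backup GPT header actually is
    print(disk_sectors * SECTOR / 1e9)              # ~85.9 GB, the logged "(85.9 GB/80.0 GiB)"
    print(disk_sectors * SECTOR / 2**30)            # ~80.0 GiB
    print((image_alt_header + 1) * SECTOR / 2**30)  # ~4.4 GiB: roughly the original image size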
Nov 1 00:24:23.942076 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 1 00:24:23.953908 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:24:23.959148 disk-uuid[569]: Primary Header is updated.
Nov 1 00:24:23.959148 disk-uuid[569]: Secondary Entries is updated.
Nov 1 00:24:23.959148 disk-uuid[569]: Secondary Header is updated.
Nov 1 00:24:23.967777 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:24:23.976780 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:24:24.980606 disk-uuid[570]: The operation has completed successfully.
Nov 1 00:24:24.981956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:24:25.032449 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:24:25.032610 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:24:25.052900 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:24:25.059584 sh[584]: Success
Nov 1 00:24:25.077144 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:24:25.132608 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:24:25.142942 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:24:25.145025 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:24:25.164630 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:24:25.164659 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:24:25.168079 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:24:25.173759 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:24:25.173780 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:24:25.185806 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 1 00:24:25.188057 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:24:25.189521 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:24:25.194857 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:24:25.196607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:24:25.223119 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:24:25.223144 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:24:25.223156 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:24:25.232462 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:24:25.232494 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:24:25.250775 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:24:25.250483 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:24:25.259335 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:24:25.269838 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:24:25.356787 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
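The verity-setup lines above open /dev/mapper/usr against the verity.usrhash root hash from the kernel command line: the /usr partition is split into blocks, each block is hashed (sha256, per the device-mapper line), and the digests are hashed again level by level until a single root remains, which must match the cmdline value. A conceptual sketch of that hash-tree idea only; the real dm-verity on-disk format packs salted digests into 4 KiB hash blocks, so this does not reproduce the actual usrhash:

    import hashlib

    def verity_root_sketch(data: bytes, block=4096, fanout=128) -> str:
        # Hash each data block, then repeatedly hash groups of digests
        # until one root digest remains (simplified; no salt, no padding).
        level = [hashlib.sha256(data[i:i + block]).digest()
                 for i in range(0, len(data), block)]
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                     for i in range(0, len(level), fanout)]
        return level[0].hex()

Any post-boot modification of a /usr block changes its leaf digest, which propagates to the root and makes the device fail verification, which is why the image ships mount.usrflags=ro.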
Nov 1 00:24:25.362695 ignition[706]: Ignition 2.19.0
Nov 1 00:24:25.363905 ignition[706]: Stage: fetch-offline
Nov 1 00:24:25.364933 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:24:25.364109 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:24:25.366081 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:24:25.364126 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:24:25.364220 ignition[706]: parsed url from cmdline: ""
Nov 1 00:24:25.364225 ignition[706]: no config URL provided
Nov 1 00:24:25.364231 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:24:25.364241 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:24:25.364247 ignition[706]: failed to fetch config: resource requires networking
Nov 1 00:24:25.364400 ignition[706]: Ignition finished successfully
Nov 1 00:24:25.399081 systemd-networkd[769]: lo: Link UP
Nov 1 00:24:25.399097 systemd-networkd[769]: lo: Gained carrier
Nov 1 00:24:25.400784 systemd-networkd[769]: Enumeration completed
Nov 1 00:24:25.400865 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:24:25.402168 systemd[1]: Reached target network.target - Network.
Nov 1 00:24:25.402507 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:24:25.402511 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:24:25.404161 systemd-networkd[769]: eth0: Link UP
Nov 1 00:24:25.404166 systemd-networkd[769]: eth0: Gained carrier
Nov 1 00:24:25.404174 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:24:25.414892 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 1 00:24:25.431413 ignition[773]: Ignition 2.19.0
Nov 1 00:24:25.431427 ignition[773]: Stage: fetch
Nov 1 00:24:25.431612 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:24:25.431624 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:24:25.431765 ignition[773]: parsed url from cmdline: ""
Nov 1 00:24:25.431771 ignition[773]: no config URL provided
Nov 1 00:24:25.431778 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:24:25.431791 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:24:25.431823 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 1 00:24:25.432038 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 00:24:25.632481 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 1 00:24:25.632713 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 00:24:26.033447 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 1 00:24:26.033594 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 00:24:26.136797 systemd-networkd[769]: eth0: DHCPv4 address 172.234.26.141/24, gateway 172.234.26.1 acquired from 23.213.15.222
Nov 1 00:24:26.833723 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 1 00:24:26.928122 ignition[773]: PUT result: OK
Nov 1 00:24:26.928196 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 1 00:24:27.010019 systemd-networkd[769]: eth0: Gained IPv6LL
Nov 1 00:24:27.042826 ignition[773]: GET result: OK
Nov 1 00:24:27.042987 ignition[773]: parsing config with SHA512: 2a5170fc13a2b6d6e8a9a2235deeec77eff90ed6c4ff891354fba896c4f0f06dfebfac8d1a60fd9b4fb0073620e2cff7015f61c11e13e575a5662731321e2fcd
Nov 1 00:24:27.049697 unknown[773]: fetched base config from "system"
Nov 1 00:24:27.049718 unknown[773]: fetched base config from "system"
Nov 1 00:24:27.050685 ignition[773]: fetch: fetch complete
Nov 1 00:24:27.049754 unknown[773]: fetched user config from "akamai"
Nov 1 00:24:27.050693 ignition[773]: fetch: fetch passed
Nov 1 00:24:27.054135 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 00:24:27.050774 ignition[773]: Ignition finished successfully
Nov 1 00:24:27.063928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:24:27.083432 ignition[781]: Ignition 2.19.0
Nov 1 00:24:27.083456 ignition[781]: Stage: kargs
Nov 1 00:24:27.083671 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:24:27.083688 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:24:27.085523 ignition[781]: kargs: kargs passed
Nov 1 00:24:27.089072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:24:27.085584 ignition[781]: Ignition finished successfully
Nov 1 00:24:27.096889 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:24:27.119870 ignition[788]: Ignition 2.19.0
Nov 1 00:24:27.119884 ignition[788]: Stage: disks
Nov 1 00:24:27.120046 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:24:27.120058 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 1 00:24:27.124515 systemd[1]: Finished ignition-disks.service - Ignition (disks).
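The fetch stage above shows the Akamai/Linode metadata flow: a PUT to /v1/token is retried with roughly doubling delays (0.2s, 0.4s, 0.8s between the timestamps) until DHCP brings the link up, then /v1/user-data is fetched with the token. A minimal sketch of that flow using only the Python standard library; the Metadata-Token header names are an assumption based on Linode's metadata API, not something visible in this log:

    import time
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def fetch_user_data(max_attempts=10, delay=0.2):
        for attempt in range(1, max_attempts + 1):
            try:
                # PUT /v1/token, as in the "PUT http://169.254.169.254/v1/token" lines.
                req = urllib.request.Request(
                    BASE + "/token", method="PUT",
                    headers={"Metadata-Token-Expiry-Seconds": "3600"},  # assumed header name
                )
                token = urllib.request.urlopen(req, timeout=5).read().decode()
                # GET /v1/user-data with the token, as in the "GET ... user-data" line.
                req = urllib.request.Request(
                    BASE + "/user-data", headers={"Metadata-Token": token},  # assumed header name
                )
                return urllib.request.urlopen(req, timeout=5).read()
            except OSError:
                # "network is unreachable" until DHCP completes; back off and retry.
                time.sleep(delay)
                delay *= 2
        raise TimeoutError("metadata service unreachable")

Once fetched, Ignition logs the SHA512 of the config before parsing it, which is the digest shown in the "parsing config with SHA512:" line.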
Nov 1 00:24:27.120680 ignition[788]: disks: disks passed
Nov 1 00:24:27.149314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:24:27.120723 ignition[788]: Ignition finished successfully
Nov 1 00:24:27.151136 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:24:27.153442 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:24:27.155175 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:24:27.157392 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:24:27.165910 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:24:27.186984 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:24:27.190761 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:24:27.196889 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:24:27.285764 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:24:27.286242 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:24:27.287817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:24:27.296816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:24:27.299859 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:24:27.305111 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:24:27.305170 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:24:27.305201 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:24:27.318870 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Nov 1 00:24:27.318896 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:24:27.319529 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:24:27.327382 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:24:27.327409 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:24:27.334107 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:24:27.334131 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:24:27.340931 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:24:27.344773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:24:27.395160 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:24:27.402404 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:24:27.409490 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:24:27.415101 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:24:27.524270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:24:27.530874 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:24:27.534470 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:24:27.543209 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:24:27.546127 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:24:27.578611 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:24:27.581021 ignition[917]: INFO : Ignition 2.19.0 Nov 1 00:24:27.581021 ignition[917]: INFO : Stage: mount Nov 1 00:24:27.583447 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:27.583447 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:27.583447 ignition[917]: INFO : mount: mount passed Nov 1 00:24:27.583447 ignition[917]: INFO : Ignition finished successfully Nov 1 00:24:27.584515 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:24:27.592871 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:24:28.292895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:24:28.310783 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Nov 1 00:24:28.310828 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:24:28.315140 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:24:28.318817 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:24:28.328090 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:24:28.328193 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:24:28.333536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:24:28.366996 ignition[946]: INFO : Ignition 2.19.0 Nov 1 00:24:28.368168 ignition[946]: INFO : Stage: files Nov 1 00:24:28.368168 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:28.368168 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:28.371443 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:24:28.371443 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:24:28.371443 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:24:28.375287 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:24:28.376870 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:24:28.378166 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:24:28.378086 unknown[946]: wrote ssh authorized keys file for user: core Nov 1 00:24:28.380619 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:24:28.380619 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:24:28.579564 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:24:28.716116 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:24:28.718148 ignition[946]: 
INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:28.735377 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:28.735377 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:28.735377 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:24:29.185418 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:24:29.466850 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:29.466850 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(f): [started] setting preset 
to enabled for "prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:24:29.472876 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:24:29.472876 ignition[946]: INFO : files: files passed Nov 1 00:24:29.472876 ignition[946]: INFO : Ignition finished successfully Nov 1 00:24:29.473537 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:24:29.511849 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:24:29.515899 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:24:29.519095 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:24:29.519412 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:24:29.539784 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:24:29.539784 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:24:29.543019 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:24:29.542661 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:24:29.544555 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:24:29.552915 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:24:29.589882 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:24:29.590028 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:24:29.592716 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:24:29.594704 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:24:29.597070 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:24:29.601937 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:24:29.621269 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:24:29.633942 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:24:29.645690 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:24:29.646980 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:24:29.649560 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:24:29.651714 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:24:29.651869 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:24:29.654401 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:24:29.655637 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:24:29.657611 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:24:29.659707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
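The files stage above writes SSH keys for "core", several files and a symlink under /sysroot, a systemd unit plus a drop-in, and an enablement preset. A hypothetical Ignition-style config that would produce roughly those operations; the field names follow the Ignition v3 spec (an assumption — the journal shows Ignition 2.19.0 but never the config itself), and the values are illustrative:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,SERVER%3Ddisabled%0A"}},  # illustrative contents
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm\n[Install]\nWantedBy=multi-user.target\n"},
        ]},
    }
    print(json.dumps(config, indent=2))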
Nov 1 00:24:29.661512 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:24:29.663757 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:24:29.665948 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:24:29.668657 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:24:29.670862 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:24:29.673083 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:24:29.674965 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:24:29.675069 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:24:29.677844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:24:29.679179 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:24:29.681285 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:24:29.683160 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:24:29.685039 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:24:29.685194 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:24:29.688127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:24:29.688256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:24:29.689615 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:24:29.689715 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:24:29.700028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:24:29.705926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:24:29.707800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:24:29.707940 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:24:29.714630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:24:29.715715 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:24:29.718838 ignition[999]: INFO : Ignition 2.19.0 Nov 1 00:24:29.718838 ignition[999]: INFO : Stage: umount Nov 1 00:24:29.718838 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:29.718838 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:29.727199 ignition[999]: INFO : umount: umount passed Nov 1 00:24:29.727199 ignition[999]: INFO : Ignition finished successfully Nov 1 00:24:29.720936 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:24:29.721054 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:24:29.726618 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:24:29.728872 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:24:29.731537 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:24:29.731608 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:24:29.733882 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:24:29.733934 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:24:29.735171 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:24:29.735221 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 1 00:24:29.736850 systemd[1]: Stopped target network.target - Network. Nov 1 00:24:29.739169 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:24:29.739225 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:24:29.742026 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:24:29.742954 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:24:29.743646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:24:29.745373 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:24:29.746177 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:24:29.748883 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:24:29.748933 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:24:29.775155 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:24:29.775290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:24:29.776956 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:24:29.777020 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:24:29.779420 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:24:29.779515 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:24:29.782151 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:24:29.784244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:24:29.788495 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:24:29.789140 systemd-networkd[769]: eth0: DHCPv6 lease lost Nov 1 00:24:29.789251 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:24:29.789584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:24:29.791476 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:24:29.791592 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:24:29.795027 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:24:29.795092 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:24:29.799076 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:24:29.799151 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:24:29.809842 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:24:29.813222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:24:29.813285 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:24:29.815538 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:24:29.820628 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:24:29.820781 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:24:29.832068 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:24:29.833245 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:24:29.847442 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:24:29.847511 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:24:29.848652 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Nov 1 00:24:29.848695 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:24:29.850715 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:24:29.850797 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:24:29.853890 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:24:29.853941 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:24:29.856285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:24:29.856340 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:24:29.865878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:24:29.868320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:24:29.868421 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:24:29.870495 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:24:29.870549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:24:29.871579 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:24:29.871653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:24:29.873988 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:24:29.874057 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:24:29.876684 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:24:29.876775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:24:29.879163 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:24:29.879213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:24:29.881614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:24:29.881661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:24:29.884746 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:24:29.884898 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:24:29.886811 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:24:29.886925 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:24:29.890549 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:24:29.898914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:24:29.909878 systemd[1]: Switching root. 
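"Switching root" is the pivot out of the initrd: every unit above is stopped or closed so that nothing still holds the old root, then PID 1 re-executes itself with /sysroot as /. The equivalent manual trigger, as a sketch (the log shows the systemd-internal path, not this command):

    import subprocess

    # systemctl switch-root unmounts the initramfs world and makes /sysroot
    # the new root; journald stops and restarts across the transition, which
    # is why "Journal stopped" is the last initrd journal line below.
    subprocess.run(["systemctl", "switch-root", "/sysroot"], check=True)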
Nov 1 00:24:29.939573 systemd-journald[177]: Journal stopped Nov 1
00:24:22.056394 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Nov 1 00:24:22.056400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Nov 1 00:24:22.056409 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:24:22.056415 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:24:22.056422 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Nov 1 00:24:22.056428 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:24:22.056435 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:24:22.056441 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:24:22.056448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:24:22.056454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:24:22.056461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:24:22.056470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:24:22.056477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:24:22.056483 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:24:22.056489 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:24:22.056496 kernel: TSC deadline timer available Nov 1 00:24:22.056502 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 00:24:22.056508 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 1 00:24:22.056515 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 1 00:24:22.056521 kernel: kvm-guest: setup PV sched yield Nov 1 00:24:22.056527 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 1 00:24:22.056536 kernel: Booting paravirtualized kernel on KVM Nov 1 00:24:22.056543 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:24:22.056549 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 1 00:24:22.056556 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 1 00:24:22.056562 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 1 00:24:22.056569 kernel: pcpu-alloc: [0] 0 1 Nov 1 00:24:22.056575 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:24:22.056582 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:24:22.056589 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:24:22.056598 kernel: random: crng init done Nov 1 00:24:22.056604 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:24:22.056611 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:24:22.056617 kernel: Fallback order for Node 0: 0 Nov 1 00:24:22.056624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Nov 1 00:24:22.056630 kernel: Policy zone: Normal Nov 1 00:24:22.056636 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:24:22.056643 kernel: software IO TLB: area num 2. 
Nov 1 00:24:22.056652 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 227308K reserved, 0K cma-reserved) Nov 1 00:24:22.056659 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:24:22.056665 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:24:22.056672 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:24:22.056678 kernel: Dynamic Preempt: voluntary Nov 1 00:24:22.056685 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:24:22.056692 kernel: rcu: RCU event tracing is enabled. Nov 1 00:24:22.056699 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:24:22.056705 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:24:22.056715 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:24:22.056721 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:24:22.056759 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:24:22.056769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:24:22.056776 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 1 00:24:22.056782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 00:24:22.056789 kernel: Console: colour VGA+ 80x25 Nov 1 00:24:22.056795 kernel: printk: console [tty0] enabled Nov 1 00:24:22.056801 kernel: printk: console [ttyS0] enabled Nov 1 00:24:22.056813 kernel: ACPI: Core revision 20230628 Nov 1 00:24:22.056820 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:24:22.056826 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:24:22.056833 kernel: x2apic enabled Nov 1 00:24:22.056850 kernel: APIC: Switched APIC routing to: physical x2apic Nov 1 00:24:22.056860 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 1 00:24:22.056866 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 1 00:24:22.056873 kernel: kvm-guest: setup PV IPIs Nov 1 00:24:22.056880 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:24:22.056886 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 1 00:24:22.056893 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997) Nov 1 00:24:22.056900 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 00:24:22.056910 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 1 00:24:22.056916 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 1 00:24:22.056923 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:24:22.056930 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:24:22.056940 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:24:22.056946 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 1 00:24:22.056953 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:24:22.056960 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:24:22.056967 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 1 00:24:22.056974 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
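The Spectre/SRSO/TSA lines above are the kernel's one-time mitigation report; the same state stays queryable at runtime through sysfs. A short sketch that dumps the equivalent view on a booted system:

    from pathlib import Path

    # Each file matches one of the boot-time mitigation lines, e.g.
    # spec_store_bypass -> "Mitigation: Speculative Store Bypass disabled via prctl"
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")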
Nov 1 00:24:22.056981 kernel: active return thunk: srso_alias_return_thunk Nov 1 00:24:22.056988 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 1 00:24:22.056994 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Nov 1 00:24:22.057004 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:24:22.057011 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:24:22.057018 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:24:22.057025 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:24:22.057032 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 1 00:24:22.057038 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:24:22.057045 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Nov 1 00:24:22.057052 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Nov 1 00:24:22.057061 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:24:22.057068 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:24:22.057075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:24:22.057082 kernel: landlock: Up and running. Nov 1 00:24:22.057088 kernel: SELinux: Initializing. Nov 1 00:24:22.057095 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:24:22.057102 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:24:22.057109 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Nov 1 00:24:22.057115 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:24:22.057126 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:24:22.057133 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:24:22.057139 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 1 00:24:22.057146 kernel: ... version: 0 Nov 1 00:24:22.057153 kernel: ... bit width: 48 Nov 1 00:24:22.057159 kernel: ... generic registers: 6 Nov 1 00:24:22.057166 kernel: ... value mask: 0000ffffffffffff Nov 1 00:24:22.057173 kernel: ... max period: 00007fffffffffff Nov 1 00:24:22.057180 kernel: ... fixed-purpose events: 0 Nov 1 00:24:22.057189 kernel: ... event mask: 000000000000003f Nov 1 00:24:22.057196 kernel: signal: max sigframe size: 3376 Nov 1 00:24:22.057203 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:24:22.057210 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:24:22.057217 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:24:22.057224 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:24:22.057230 kernel: .... 
node #0, CPUs: #1 Nov 1 00:24:22.057237 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:24:22.057244 kernel: smpboot: Max logical packages: 1 Nov 1 00:24:22.057250 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS) Nov 1 00:24:22.057459 kernel: devtmpfs: initialized Nov 1 00:24:22.057466 kernel: x86/mm: Memory block size: 128MB Nov 1 00:24:22.057473 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:24:22.057480 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:24:22.057486 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:24:22.057493 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:24:22.057499 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:24:22.057506 kernel: audit: type=2000 audit(1761956660.757:1): state=initialized audit_enabled=0 res=1 Nov 1 00:24:22.057513 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:24:22.057522 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:24:22.057529 kernel: cpuidle: using governor menu Nov 1 00:24:22.057536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:24:22.057542 kernel: dca service started, version 1.12.1 Nov 1 00:24:22.057549 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 1 00:24:22.057556 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 1 00:24:22.057563 kernel: PCI: Using configuration type 1 for base access Nov 1 00:24:22.057569 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:24:22.057580 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:24:22.057587 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:24:22.057593 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:24:22.057600 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:24:22.057607 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:24:22.057613 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:24:22.057620 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:24:22.057627 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:24:22.057633 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 1 00:24:22.057643 kernel: ACPI: Interpreter enabled Nov 1 00:24:22.057650 kernel: ACPI: PM: (supports S0 S3 S5) Nov 1 00:24:22.057657 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:24:22.057664 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:24:22.057670 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:24:22.057677 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 00:24:22.057684 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:24:22.058921 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:24:22.059072 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 1 00:24:22.059204 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 1 00:24:22.059214 kernel: PCI host bridge to bus 0000:00 Nov 1 00:24:22.059344 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:24:22.059510 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:24:22.059636 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:24:22.059797 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 1 00:24:22.059928 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:24:22.060043 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Nov 1 00:24:22.060157 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:24:22.060302 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 1 00:24:22.060440 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 1 00:24:22.060568 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 1 00:24:22.060694 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 1 00:24:22.062885 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 1 00:24:22.063023 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:24:22.063170 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Nov 1 00:24:22.063301 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Nov 1 00:24:22.063431 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 1 00:24:22.063558 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 1 00:24:22.063692 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:24:22.063870 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 1 00:24:22.064004 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 1 00:24:22.064305 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 1 00:24:22.064432 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 1 00:24:22.064567 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 1 00:24:22.064693 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 00:24:22.064892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 1 00:24:22.065026 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Nov 1 00:24:22.065152 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Nov 1 00:24:22.065288 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 1 00:24:22.065414 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 1 00:24:22.065424 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:24:22.065431 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:24:22.065443 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:24:22.065450 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:24:22.065457 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 00:24:22.065464 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 00:24:22.065470 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 00:24:22.065477 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 00:24:22.065484 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 00:24:22.065491 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 00:24:22.065498 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 00:24:22.065507 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 00:24:22.065515 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 00:24:22.065521 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 00:24:22.065528 
kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 1 00:24:22.065535 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 00:24:22.065542 kernel: iommu: Default domain type: Translated Nov 1 00:24:22.065549 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:24:22.065556 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:24:22.065562 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:24:22.065572 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Nov 1 00:24:22.065578 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Nov 1 00:24:22.065704 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 00:24:22.065887 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 00:24:22.066020 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:24:22.066030 kernel: vgaarb: loaded Nov 1 00:24:22.066038 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:24:22.066044 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 1 00:24:22.066051 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:24:22.066064 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:24:22.066071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:24:22.066078 kernel: pnp: PnP ACPI init Nov 1 00:24:22.066212 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 00:24:22.066223 kernel: pnp: PnP ACPI: found 5 devices Nov 1 00:24:22.066230 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:24:22.066437 kernel: NET: Registered PF_INET protocol family Nov 1 00:24:22.066444 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:24:22.066454 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:24:22.066462 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:24:22.066468 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:24:22.066475 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 1 00:24:22.066482 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:24:22.066489 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:24:22.066496 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:24:22.066503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:24:22.066510 kernel: NET: Registered PF_XDP protocol family Nov 1 00:24:22.066632 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:24:22.067137 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:24:22.067454 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:24:22.067569 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 1 00:24:22.067795 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 00:24:22.067933 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Nov 1 00:24:22.067944 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:24:22.067951 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 00:24:22.067963 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Nov 1 00:24:22.067970 kernel: Initialise system trusted keyrings Nov 1 00:24:22.067977 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Nov 1 00:24:22.067984 kernel: Key type asymmetric registered Nov 1 00:24:22.067990 kernel: Asymmetric key parser 'x509' registered Nov 1 00:24:22.067997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:24:22.068004 kernel: io scheduler mq-deadline registered Nov 1 00:24:22.068010 kernel: io scheduler kyber registered Nov 1 00:24:22.068017 kernel: io scheduler bfq registered Nov 1 00:24:22.068027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:24:22.068034 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:24:22.068041 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:24:22.068048 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:24:22.068055 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:24:22.068062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:24:22.068068 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:24:22.068075 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:24:22.068394 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 00:24:22.068409 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:24:22.068527 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 00:24:22.068644 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:24:21 UTC (1761956661) Nov 1 00:24:22.068799 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:24:22.068814 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 00:24:22.068821 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:24:22.068828 kernel: Segment Routing with IPv6 Nov 1 00:24:22.068835 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:24:22.068847 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:24:22.068854 kernel: Key type dns_resolver registered Nov 1 00:24:22.068861 kernel: IPI shorthand broadcast: enabled Nov 1 00:24:22.068868 kernel: sched_clock: Marking stable (937006278, 364148296)->(1437312286, -136157712) Nov 1 00:24:22.068875 kernel: registered taskstats version 1 Nov 1 00:24:22.068882 kernel: Loading compiled-in X.509 certificates Nov 1 00:24:22.068889 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:24:22.068895 kernel: Key type .fscrypt registered Nov 1 00:24:22.068902 kernel: Key type fscrypt-provisioning registered Nov 1 00:24:22.068912 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 00:24:22.068919 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:24:22.068926 kernel: ima: No architecture policies found Nov 1 00:24:22.068932 kernel: clk: Disabling unused clocks Nov 1 00:24:22.068939 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:24:22.068946 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:24:22.068953 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:24:22.068960 kernel: Run /init as init process Nov 1 00:24:22.068967 kernel: with arguments: Nov 1 00:24:22.068976 kernel: /init Nov 1 00:24:22.068983 kernel: with environment: Nov 1 00:24:22.068990 kernel: HOME=/ Nov 1 00:24:22.068997 kernel: TERM=linux Nov 1 00:24:22.069005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:24:22.069014 systemd[1]: Detected virtualization kvm. Nov 1 00:24:22.069022 systemd[1]: Detected architecture x86-64. Nov 1 00:24:22.069032 systemd[1]: Running in initrd. Nov 1 00:24:22.069039 systemd[1]: No hostname configured, using default hostname. Nov 1 00:24:22.069046 systemd[1]: Hostname set to <localhost>. Nov 1 00:24:22.069054 systemd[1]: Initializing machine ID from random generator. Nov 1 00:24:22.069061 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:24:22.069069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:24:22.069091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:24:22.069102 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:24:22.069110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:24:22.069118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:24:22.069126 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:24:22.069135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:24:22.069142 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:24:22.069153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:24:22.069161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:24:22.069168 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:24:22.069176 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:24:22.069183 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:24:22.069191 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:24:22.069198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:24:22.069205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:24:22.069213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:24:22.069223 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
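The "Run /init as init process" lines above spell out exactly what the kernel hands to userspace: the initramfs /init, no arguments beyond argv[0], and a two-variable environment. As a sketch (the kernel does this internally in C, not via Python):

    import os

    # Exec the initramfs init with the argument and environment shown above.
    os.execve("/init", ["/init"], {"HOME": "/", "TERM": "linux"})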
Nov 1 00:24:22.069230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:24:22.069441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:24:22.069455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:24:22.069463 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:24:22.069471 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:24:22.069479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:24:22.069486 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:24:22.069498 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:24:22.069506 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:24:22.069513 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:24:22.069543 systemd-journald[177]: Collecting audit messages is disabled. Nov 1 00:24:22.069564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:24:22.069572 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:24:22.069583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:24:22.069591 systemd-journald[177]: Journal started Nov 1 00:24:22.069610 systemd-journald[177]: Runtime Journal (/run/log/journal/ae5889afe154496db65f799d4b068e13) is 8.0M, max 78.3M, 70.3M free. Nov 1 00:24:22.077462 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:24:22.078516 systemd-modules-load[178]: Inserted module 'overlay' Nov 1 00:24:22.079221 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:24:22.094895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:24:22.208045 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:24:22.208066 kernel: Bridge firewalling registered Nov 1 00:24:22.109852 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:24:22.114879 systemd-modules-load[178]: Inserted module 'br_netfilter' Nov 1 00:24:22.210924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:24:22.214397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:24:22.227287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:24:22.234917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:24:22.236877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:24:22.264200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:24:22.265786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:24:22.277915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:24:22.286902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:24:22.291176 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:24:22.295498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
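systemd-modules-load inserted 'overlay' and then 'br_netfilter' above, which is exactly what the bridge subsystem's warning asks for now that bridge traffic is no longer filtered by default. A hypothetical modules-load.d fragment that would request both (the path and file name are assumptions; the log does not show where the module list came from):

    from pathlib import Path

    # One module name per line, the same list the "Inserted module" entries reflect.
    Path("/etc/modules-load.d/initrd.conf").write_text("overlay\nbr_netfilter\n")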
Nov 1 00:24:22.305944 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:24:22.324288 dracut-cmdline[212]: dracut-dracut-053 Nov 1 00:24:22.327494 systemd-resolved[202]: Positive Trust Anchors: Nov 1 00:24:22.327515 systemd-resolved[202]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:24:22.333235 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:24:22.327544 systemd-resolved[202]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:24:22.337116 systemd-resolved[202]: Defaulting to hostname 'linux'. Nov 1 00:24:22.338686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:24:22.340325 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:24:22.429784 kernel: SCSI subsystem initialized Nov 1 00:24:22.439801 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:24:22.452764 kernel: iscsi: registered transport (tcp) Nov 1 00:24:22.476072 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:24:22.476174 kernel: QLogic iSCSI HBA Driver Nov 1 00:24:22.548134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:24:22.553950 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:24:22.585096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:24:22.585186 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:24:22.587403 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:24:22.634764 kernel: raid6: avx2x4 gen() 32578 MB/s Nov 1 00:24:22.652758 kernel: raid6: avx2x2 gen() 30568 MB/s Nov 1 00:24:22.671378 kernel: raid6: avx2x1 gen() 24949 MB/s Nov 1 00:24:22.671401 kernel: raid6: using algorithm avx2x4 gen() 32578 MB/s Nov 1 00:24:22.694123 kernel: raid6: .... xor() 5133 MB/s, rmw enabled Nov 1 00:24:22.694145 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:24:22.717783 kernel: xor: automatically using best checksumming function avx Nov 1 00:24:22.860798 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:24:22.880255 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:24:22.891950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:24:22.904297 systemd-udevd[395]: Using default interface naming scheme 'v255'. Nov 1 00:24:22.909493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 1 00:24:22.919700 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:24:22.955767 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Nov 1 00:24:22.994509 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:24:23.002908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:24:23.082040 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:24:23.093041 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:24:23.110619 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:24:23.115488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:24:23.119069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:24:23.121150 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:24:23.128892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:24:23.152008 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:24:23.198782 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:24:23.210752 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:24:23.211480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:24:23.213367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:24:23.217717 kernel: libata version 3.00 loaded. Nov 1 00:24:23.216578 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:24:23.218838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:24:23.226050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:24:23.228640 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:24:23.228658 kernel: AES CTR mode by8 optimization enabled Nov 1 00:24:23.230406 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:24:23.238236 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 1 00:24:23.381894 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:24:23.377174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 00:24:23.418765 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:24:23.427994 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:24:23.429188 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:24:23.434665 kernel: scsi host1: ahci Nov 1 00:24:23.436598 kernel: scsi host2: ahci Nov 1 00:24:23.441790 kernel: scsi host3: ahci Nov 1 00:24:23.444749 kernel: scsi host4: ahci Nov 1 00:24:23.446911 kernel: scsi host5: ahci Nov 1 00:24:23.461118 kernel: scsi host6: ahci Nov 1 00:24:23.461351 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Nov 1 00:24:23.461375 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Nov 1 00:24:23.461386 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Nov 1 00:24:23.461395 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Nov 1 00:24:23.461406 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Nov 1 00:24:23.461416 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Nov 1 00:24:23.564320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:24:23.570946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:24:23.588916 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:24:23.772749 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:24:23.772786 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 00:24:23.776120 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:24:23.776750 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:24:23.779754 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:24:23.785775 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:24:23.808032 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 1 00:24:23.808453 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 1 00:24:23.836617 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:24:23.836884 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 1 00:24:23.837084 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:24:23.845979 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:24:23.846006 kernel: GPT:9289727 != 167739391 Nov 1 00:24:23.848046 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:24:23.851690 kernel: GPT:9289727 != 167739391 Nov 1 00:24:23.851716 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:24:23.856035 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:24:23.858813 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:24:23.899047 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (448) Nov 1 00:24:23.904049 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 1 00:24:23.907000 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (451) Nov 1 00:24:23.922167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 1 00:24:23.934156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 1 00:24:23.939441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Nov 1 00:24:23.942076 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 1 00:24:23.953908 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:24:23.959148 disk-uuid[569]: Primary Header is updated. Nov 1 00:24:23.959148 disk-uuid[569]: Secondary Entries is updated. Nov 1 00:24:23.959148 disk-uuid[569]: Secondary Header is updated. Nov 1 00:24:23.967777 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:24:23.976780 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:24:24.980606 disk-uuid[570]: The operation has completed successfully. Nov 1 00:24:24.981956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:24:25.032449 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:24:25.032610 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:24:25.052900 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:24:25.059584 sh[584]: Success Nov 1 00:24:25.077144 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:24:25.132608 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:24:25.142942 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:24:25.145025 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:24:25.164630 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:24:25.164659 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:24:25.168079 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:24:25.173759 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:24:25.173780 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:24:25.185806 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 00:24:25.188057 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:24:25.189521 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:24:25.194857 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:24:25.196607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:24:25.223119 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:24:25.223144 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:24:25.223156 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:24:25.232462 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:24:25.232494 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:24:25.250775 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:24:25.250483 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:24:25.259335 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:24:25.269838 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:24:25.356787 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 1 00:24:25.362695 ignition[706]: Ignition 2.19.0 Nov 1 00:24:25.363905 ignition[706]: Stage: fetch-offline Nov 1 00:24:25.364933 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:24:25.364109 ignition[706]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:25.366081 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:24:25.364126 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:25.364220 ignition[706]: parsed url from cmdline: "" Nov 1 00:24:25.364225 ignition[706]: no config URL provided Nov 1 00:24:25.364231 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:24:25.364241 ignition[706]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:24:25.364247 ignition[706]: failed to fetch config: resource requires networking Nov 1 00:24:25.364400 ignition[706]: Ignition finished successfully Nov 1 00:24:25.399081 systemd-networkd[769]: lo: Link UP Nov 1 00:24:25.399097 systemd-networkd[769]: lo: Gained carrier Nov 1 00:24:25.400784 systemd-networkd[769]: Enumeration completed Nov 1 00:24:25.400865 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:24:25.402168 systemd[1]: Reached target network.target - Network. Nov 1 00:24:25.402507 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:24:25.402511 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:24:25.404161 systemd-networkd[769]: eth0: Link UP Nov 1 00:24:25.404166 systemd-networkd[769]: eth0: Gained carrier Nov 1 00:24:25.404174 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:24:25.414892 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 1 00:24:25.431413 ignition[773]: Ignition 2.19.0 Nov 1 00:24:25.431427 ignition[773]: Stage: fetch Nov 1 00:24:25.431612 ignition[773]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:25.431624 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:25.431765 ignition[773]: parsed url from cmdline: "" Nov 1 00:24:25.431771 ignition[773]: no config URL provided Nov 1 00:24:25.431778 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:24:25.431791 ignition[773]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:24:25.431823 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 1 00:24:25.432038 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:24:25.632481 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2 Nov 1 00:24:25.632713 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:24:26.033447 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3 Nov 1 00:24:26.033594 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:24:26.136797 systemd-networkd[769]: eth0: DHCPv4 address 172.234.26.141/24, gateway 172.234.26.1 acquired from 23.213.15.222 Nov 1 00:24:26.833723 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4 Nov 1 00:24:26.928122 ignition[773]: PUT result: OK Nov 1 00:24:26.928196 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 1 00:24:27.010019 systemd-networkd[769]: eth0: Gained IPv6LL Nov 1 00:24:27.042826 ignition[773]: GET result: OK Nov 1 00:24:27.042987 ignition[773]: parsing config with SHA512: 2a5170fc13a2b6d6e8a9a2235deeec77eff90ed6c4ff891354fba896c4f0f06dfebfac8d1a60fd9b4fb0073620e2cff7015f61c11e13e575a5662731321e2fcd Nov 1 00:24:27.049697 unknown[773]: fetched base config from "system" Nov 1 00:24:27.049718 unknown[773]: fetched base config from "system" Nov 1 00:24:27.050685 ignition[773]: fetch: fetch complete Nov 1 00:24:27.049754 unknown[773]: fetched user config from "akamai" Nov 1 00:24:27.050693 ignition[773]: fetch: fetch passed Nov 1 00:24:27.054135 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:24:27.050774 ignition[773]: Ignition finished successfully Nov 1 00:24:27.063928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:24:27.083432 ignition[781]: Ignition 2.19.0 Nov 1 00:24:27.083456 ignition[781]: Stage: kargs Nov 1 00:24:27.083671 ignition[781]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:27.083688 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:27.085523 ignition[781]: kargs: kargs passed Nov 1 00:24:27.089072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:24:27.085584 ignition[781]: Ignition finished successfully Nov 1 00:24:27.096889 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:24:27.119870 ignition[788]: Ignition 2.19.0 Nov 1 00:24:27.119884 ignition[788]: Stage: disks Nov 1 00:24:27.120046 ignition[788]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:27.120058 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:27.124515 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Nov 1 00:24:27.120680 ignition[788]: disks: disks passed Nov 1 00:24:27.149314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:24:27.120723 ignition[788]: Ignition finished successfully Nov 1 00:24:27.151136 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:24:27.153442 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:24:27.155175 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:24:27.157392 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:24:27.165910 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:24:27.186984 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 00:24:27.190761 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:24:27.196889 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:24:27.285764 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:24:27.286242 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:24:27.287817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:24:27.296816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:24:27.299859 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:24:27.305111 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:24:27.305170 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:24:27.305201 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:24:27.318870 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804) Nov 1 00:24:27.318896 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:24:27.319529 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:24:27.327382 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:24:27.327409 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:24:27.334107 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:24:27.334131 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:24:27.340931 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:24:27.344773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:24:27.395160 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:24:27.402404 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:24:27.409490 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:24:27.415101 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:24:27.524270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:24:27.530874 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:24:27.534470 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:24:27.543209 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 1 00:24:27.546127 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:24:27.578611 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:24:27.581021 ignition[917]: INFO : Ignition 2.19.0 Nov 1 00:24:27.581021 ignition[917]: INFO : Stage: mount Nov 1 00:24:27.583447 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:27.583447 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:27.583447 ignition[917]: INFO : mount: mount passed Nov 1 00:24:27.583447 ignition[917]: INFO : Ignition finished successfully Nov 1 00:24:27.584515 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:24:27.592871 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:24:28.292895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:24:28.310783 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Nov 1 00:24:28.310828 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:24:28.315140 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:24:28.318817 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:24:28.328090 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:24:28.328193 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:24:28.333536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:24:28.366996 ignition[946]: INFO : Ignition 2.19.0 Nov 1 00:24:28.368168 ignition[946]: INFO : Stage: files Nov 1 00:24:28.368168 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:28.368168 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:28.371443 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:24:28.371443 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:24:28.371443 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:24:28.375287 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:24:28.376870 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:24:28.378166 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:24:28.378086 unknown[946]: wrote ssh authorized keys file for user: core Nov 1 00:24:28.380619 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:24:28.380619 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:24:28.579564 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:24:28.716116 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:24:28.718148 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:28.735377 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:28.735377 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:28.735377 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:24:29.185418 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:24:29.466850 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:24:29.466850 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:24:29.472876 ignition[946]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:24:29.472876 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:24:29.472876 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:24:29.472876 ignition[946]: INFO : files: files passed Nov 1 00:24:29.472876 ignition[946]: INFO : Ignition finished successfully Nov 1 00:24:29.473537 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:24:29.511849 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:24:29.515899 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:24:29.519095 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:24:29.519412 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:24:29.539784 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:24:29.539784 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:24:29.543019 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:24:29.542661 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:24:29.544555 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:24:29.552915 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:24:29.589882 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:24:29.590028 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:24:29.592716 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:24:29.594704 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:24:29.597070 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:24:29.601937 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:24:29.621269 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:24:29.633942 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:24:29.645690 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:24:29.646980 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:24:29.649560 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:24:29.651714 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:24:29.651869 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:24:29.654401 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:24:29.655637 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:24:29.657611 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:24:29.659707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:24:29.661512 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:24:29.663757 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:24:29.665948 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:24:29.668657 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:24:29.670862 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:24:29.673083 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:24:29.674965 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:24:29.675069 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:24:29.677844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:24:29.679179 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:24:29.681285 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:24:29.683160 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:24:29.685039 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:24:29.685194 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:24:29.688127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:24:29.688256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:24:29.689615 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:24:29.689715 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:24:29.700028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:24:29.705926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:24:29.707800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:24:29.707940 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:24:29.714630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:24:29.715715 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:24:29.718838 ignition[999]: INFO : Ignition 2.19.0 Nov 1 00:24:29.718838 ignition[999]: INFO : Stage: umount Nov 1 00:24:29.718838 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:24:29.718838 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 1 00:24:29.727199 ignition[999]: INFO : umount: umount passed Nov 1 00:24:29.727199 ignition[999]: INFO : Ignition finished successfully Nov 1 00:24:29.720936 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:24:29.721054 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:24:29.726618 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:24:29.728872 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:24:29.731537 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:24:29.731608 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:24:29.733882 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:24:29.733934 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:24:29.735171 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:24:29.735221 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 1 00:24:29.736850 systemd[1]: Stopped target network.target - Network. Nov 1 00:24:29.739169 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:24:29.739225 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:24:29.742026 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:24:29.742954 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:24:29.743646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:24:29.745373 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:24:29.746177 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:24:29.748883 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:24:29.748933 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:24:29.775155 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:24:29.775290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:24:29.776956 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:24:29.777020 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:24:29.779420 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:24:29.779515 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:24:29.782151 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:24:29.784244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:24:29.788495 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:24:29.789140 systemd-networkd[769]: eth0: DHCPv6 lease lost Nov 1 00:24:29.789251 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:24:29.789584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:24:29.791476 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:24:29.791592 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:24:29.795027 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:24:29.795092 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:24:29.799076 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:24:29.799151 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:24:29.809842 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:24:29.813222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:24:29.813285 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:24:29.815538 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:24:29.820628 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:24:29.820781 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:24:29.832068 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:24:29.833245 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:24:29.847442 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:24:29.847511 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:24:29.848652 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Nov 1 00:24:29.848695 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:24:29.850715 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:24:29.850797 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:24:29.853890 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:24:29.853941 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:24:29.856285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:24:29.856340 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:24:29.865878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:24:29.868320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:24:29.868421 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:24:29.870495 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:24:29.870549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:24:29.871579 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:24:29.871653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:24:29.873988 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:24:29.874057 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:24:29.876684 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:24:29.876775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:24:29.879163 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:24:29.879213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:24:29.881614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:24:29.881661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:24:29.884746 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:24:29.884898 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:24:29.886811 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:24:29.886925 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:24:29.890549 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:24:29.898914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:24:29.909878 systemd[1]: Switching root. Nov 1 00:24:29.939573 systemd-journald[177]: Journal stopped Nov 1 00:24:31.214121 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Nov 1 00:24:31.214152 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:24:31.214164 kernel: SELinux: policy capability open_perms=1 Nov 1 00:24:31.214174 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:24:31.214187 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:24:31.214197 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:24:31.214209 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:24:31.214218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:24:31.214228 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:24:31.214237 kernel: audit: type=1403 audit(1761956670.086:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:24:31.214248 systemd[1]: Successfully loaded SELinux policy in 59.351ms. Nov 1 00:24:31.214261 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13ms. Nov 1 00:24:31.214272 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:24:31.214283 systemd[1]: Detected virtualization kvm. Nov 1 00:24:31.214294 systemd[1]: Detected architecture x86-64. Nov 1 00:24:31.214304 systemd[1]: Detected first boot. Nov 1 00:24:31.214318 systemd[1]: Initializing machine ID from random generator. Nov 1 00:24:31.214329 zram_generator::config[1041]: No configuration found. Nov 1 00:24:31.214340 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:24:31.214350 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:24:31.214538 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:24:31.214549 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:24:31.214560 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:24:31.214573 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:24:31.214584 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:24:31.214594 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:24:31.214605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:24:31.214617 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:24:31.214627 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:24:31.214638 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:24:31.214651 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:24:31.214661 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:24:31.214672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:24:31.214682 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:24:31.214693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:24:31.214703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 1 00:24:31.214714 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:24:31.214747 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:24:31.214770 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:24:31.214782 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:24:31.214797 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:24:31.214808 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:24:31.214819 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:24:31.214830 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:24:31.214840 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:24:31.214851 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:24:31.214865 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:24:31.214876 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:24:31.214888 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:24:31.214899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:24:31.214910 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:24:31.214924 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:24:31.214935 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:24:31.214946 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:24:31.214957 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:24:31.214968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:24:31.214979 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:24:31.214990 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:24:31.215000 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:24:31.215014 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:24:31.215026 systemd[1]: Reached target machines.target - Containers. Nov 1 00:24:31.215037 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:24:31.215048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:24:31.215059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:24:31.215070 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:24:31.215081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:24:31.215091 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:24:31.215105 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:24:31.215115 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:24:31.215127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 1 00:24:31.215137 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:24:31.215148 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:24:31.215159 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:24:31.215169 kernel: fuse: init (API version 7.39) Nov 1 00:24:31.215179 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:24:31.215193 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:24:31.215392 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:24:31.215402 kernel: ACPI: bus type drm_connector registered Nov 1 00:24:31.215412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:24:31.215423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:24:31.215434 kernel: loop: module loaded Nov 1 00:24:31.215444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:24:31.215455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:24:31.215465 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:24:31.215479 systemd[1]: Stopped verity-setup.service. Nov 1 00:24:31.215510 systemd-journald[1131]: Collecting audit messages is disabled. Nov 1 00:24:31.215531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:24:31.215542 systemd-journald[1131]: Journal started Nov 1 00:24:31.215565 systemd-journald[1131]: Runtime Journal (/run/log/journal/c168cd03081d40f2a568781630d5032f) is 8.0M, max 78.3M, 70.3M free. Nov 1 00:24:30.752292 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:24:30.779030 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 1 00:24:30.779677 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:24:31.224342 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:24:31.225258 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:24:31.226365 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:24:31.227441 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:24:31.228578 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:24:31.229673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:24:31.230821 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:24:31.232059 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:24:31.233579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:24:31.235550 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:24:31.235848 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:24:31.237214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:24:31.237472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:24:31.238860 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:24:31.239115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Nov 1 00:24:31.240417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:24:31.240650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:24:31.242276 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:24:31.242714 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:24:31.244234 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:24:31.244478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:24:31.245962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:24:31.247299 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:24:31.248631 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:24:31.268160 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:24:31.300765 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:24:31.308174 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:24:31.309676 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:24:31.309806 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:24:31.311718 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:24:31.320857 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:24:31.330886 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:24:31.332006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:24:31.334826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:24:31.340690 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:24:31.341869 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:24:31.347933 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:24:31.350461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:24:31.357647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:24:31.370262 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:24:31.385180 systemd-journald[1131]: Time spent on flushing to /var/log/journal/c168cd03081d40f2a568781630d5032f is 73.300ms for 973 entries. Nov 1 00:24:31.385180 systemd-journald[1131]: System Journal (/var/log/journal/c168cd03081d40f2a568781630d5032f) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:24:31.491964 systemd-journald[1131]: Received client request to flush runtime journal. Nov 1 00:24:31.492002 kernel: loop0: detected capacity change from 0 to 140768 Nov 1 00:24:31.492017 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:24:31.492029 kernel: loop1: detected capacity change from 0 to 142488 Nov 1 00:24:31.375904 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Nov 1 00:24:31.382037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:24:31.383336 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:24:31.387934 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:24:31.389356 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:24:31.408213 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:24:31.430054 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:24:31.433017 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:24:31.442865 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:24:31.471993 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:24:31.477245 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:24:31.478960 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:24:31.497999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:24:31.500891 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:24:31.533136 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Nov 1 00:24:31.533154 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Nov 1 00:24:31.540665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:24:31.550249 kernel: loop2: detected capacity change from 0 to 219144 Nov 1 00:24:31.549958 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:24:31.600425 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:24:31.609145 kernel: loop3: detected capacity change from 0 to 8 Nov 1 00:24:31.616850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:24:31.639820 kernel: loop4: detected capacity change from 0 to 140768 Nov 1 00:24:31.656933 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 1 00:24:31.657309 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Nov 1 00:24:31.667473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:24:31.669963 kernel: loop5: detected capacity change from 0 to 142488 Nov 1 00:24:31.690132 kernel: loop6: detected capacity change from 0 to 219144 Nov 1 00:24:31.712753 kernel: loop7: detected capacity change from 0 to 8 Nov 1 00:24:31.715076 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Nov 1 00:24:31.721122 (sd-merge)[1188]: Merged extensions into '/usr'. Nov 1 00:24:31.728269 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:24:31.728345 systemd[1]: Reloading... Nov 1 00:24:31.877718 zram_generator::config[1215]: No configuration found. Nov 1 00:24:31.946426 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Nov 1 00:24:32.052064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:24:32.096542 systemd[1]: Reloading finished in 366 ms. Nov 1 00:24:32.131333 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:24:32.133584 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:24:32.136118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:24:32.148952 systemd[1]: Starting ensure-sysext.service... Nov 1 00:24:32.151895 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:24:32.163003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:24:32.168020 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:24:32.168035 systemd[1]: Reloading... Nov 1 00:24:32.207404 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:24:32.207958 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:24:32.210717 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:24:32.211181 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Nov 1 00:24:32.212895 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Nov 1 00:24:32.214513 systemd-udevd[1261]: Using default interface naming scheme 'v255'. Nov 1 00:24:32.224361 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:24:32.224377 systemd-tmpfiles[1260]: Skipping /boot Nov 1 00:24:32.252898 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:24:32.252921 systemd-tmpfiles[1260]: Skipping /boot Nov 1 00:24:32.300838 zram_generator::config[1287]: No configuration found. Nov 1 00:24:32.481896 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:24:32.513778 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:24:32.521939 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:24:32.522410 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:24:32.553761 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:24:32.556763 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:24:32.563211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:24:32.606842 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:24:32.628784 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:24:32.633377 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:24:32.634127 systemd[1]: Reloading finished in 465 ms. Nov 1 00:24:32.648805 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1309) Nov 1 00:24:32.665711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 1 00:24:32.669377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:24:32.713344 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:24:32.717207 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 1 00:24:32.732883 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:24:32.743050 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:24:32.747943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:24:32.749199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:24:32.752574 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:24:32.761675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:24:32.767019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:24:32.771423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:24:32.773373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:24:32.781481 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:24:32.786250 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:24:32.794939 lvm[1368]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:24:32.808076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:24:32.816513 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:24:32.829984 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:24:32.837026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:24:32.838719 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:24:32.843095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:24:32.843295 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:24:32.846302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:24:32.846650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:24:32.848538 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:24:32.849825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:24:32.851945 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:24:32.857105 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:24:32.883545 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:24:32.889223 systemd[1]: Finished ensure-sysext.service. Nov 1 00:24:32.895305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:24:32.896665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 1 00:24:32.896981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:24:32.905127 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:24:32.907691 augenrules[1401]: No rules Nov 1 00:24:32.913907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:24:32.922056 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:24:32.922495 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:24:32.926020 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:24:32.933390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:24:32.934640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:24:32.942887 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:24:32.953878 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:24:32.954895 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:24:32.957314 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:24:32.960174 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:24:32.968319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:24:32.968595 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:24:32.971508 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:24:32.979959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:24:32.980818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:24:32.988169 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:24:32.991333 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:24:32.991778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:24:33.004685 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:24:33.005039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:24:33.116186 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:24:33.118943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:24:33.124628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:24:33.127890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:24:33.137971 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:24:33.139971 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:24:33.162786 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:24:33.164638 systemd-resolved[1383]: Positive Trust Anchors:
Nov 1 00:24:33.165084 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:24:33.165129 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:24:33.173713 systemd-networkd[1380]: lo: Link UP Nov 1 00:24:33.173781 systemd-networkd[1380]: lo: Gained carrier Nov 1 00:24:33.176400 systemd-networkd[1380]: Enumeration completed Nov 1 00:24:33.176584 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:24:33.177039 systemd-resolved[1383]: Defaulting to hostname 'linux'. Nov 1 00:24:33.178346 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:24:33.179829 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:24:33.182931 systemd-networkd[1380]: eth0: Link UP Nov 1 00:24:33.183924 systemd-networkd[1380]: eth0: Gained carrier Nov 1 00:24:33.184037 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:24:33.185973 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:24:33.189070 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:24:33.190201 systemd[1]: Reached target network.target - Network. Nov 1 00:24:33.191169 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:24:33.208506 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:24:33.209591 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:24:33.210808 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:24:33.211975 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:24:33.213098 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:24:33.214209 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:24:33.214258 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:24:33.215274 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:24:33.216631 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:24:33.217958 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:24:33.219126 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:24:33.220859 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:24:33.223872 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:24:33.234460 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:24:33.236202 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 00:24:33.237283 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:24:33.238193 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:24:33.239164 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:24:33.239218 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:24:33.240643 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:24:33.244958 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:24:33.253977 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:24:33.258838 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:24:33.264393 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:24:33.266896 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:24:33.273921 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:24:33.286914 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:24:33.292132 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:24:33.301948 jq[1440]: false Nov 1 00:24:33.294761 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:24:33.310022 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:24:33.311658 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:24:33.312920 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:24:33.322881 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:24:33.333972 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:24:33.337219 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:24:33.338054 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:24:33.363131 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:24:33.367167 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:24:33.383593 jq[1454]: true Nov 1 00:24:33.367425 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:24:33.384388 dbus-daemon[1439]: [system] SELinux support is enabled Nov 1 00:24:33.388022 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:24:33.392246 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:24:33.401909 update_engine[1452]: I20251101 00:24:33.398258 1452 main.cc:92] Flatcar Update Engine starting Nov 1 00:24:33.393087 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:24:33.398123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 1 00:24:33.398178 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:24:33.402354 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:24:33.407391 tar[1460]: linux-amd64/LICENSE Nov 1 00:24:33.407391 tar[1460]: linux-amd64/helm Nov 1 00:24:33.407637 update_engine[1452]: I20251101 00:24:33.402972 1452 update_check_scheduler.cc:74] Next update check in 10m55s Nov 1 00:24:33.407668 coreos-metadata[1438]: Nov 01 00:24:33.403 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 1 00:24:33.402379 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:24:33.416812 jq[1470]: true Nov 1 00:24:33.418235 extend-filesystems[1441]: Found loop4 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found loop5 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found loop6 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found loop7 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda1 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda2 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda3 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found usr Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda4 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda6 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda7 Nov 1 00:24:33.418235 extend-filesystems[1441]: Found sda9 Nov 1 00:24:33.418235 extend-filesystems[1441]: Checking size of /dev/sda9 Nov 1 00:24:33.473794 extend-filesystems[1441]: Resized partition /dev/sda9 Nov 1 00:24:33.430166 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:24:33.474876 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:24:33.444910 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:24:33.491233 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Nov 1 00:24:33.548745 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1309) Nov 1 00:24:33.600796 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:24:33.605786 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:24:33.608434 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:24:33.623564 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:24:33.623997 systemd[1]: Starting sshkeys.service... Nov 1 00:24:33.627639 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:24:33.629513 systemd-logind[1448]: New seat seat0. Nov 1 00:24:33.634846 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:24:33.683604 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:24:33.693077 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 1 00:24:33.843640 coreos-metadata[1510]: Nov 01 00:24:33.842 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 1 00:24:33.857846 systemd-networkd[1380]: eth0: DHCPv4 address 172.234.26.141/24, gateway 172.234.26.1 acquired from 23.213.15.222 Nov 1 00:24:33.858082 dbus-daemon[1439]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1380 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:24:33.868547 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Nov 1 00:24:33.870772 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 1 00:24:33.895783 containerd[1464]: time="2025-11-01T00:24:33.893643389Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:24:33.907758 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Nov 1 00:24:33.929974 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 1 00:24:33.929974 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 1 00:24:33.929974 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Nov 1 00:24:33.942237 extend-filesystems[1441]: Resized filesystem in /dev/sda9 Nov 1 00:24:33.932064 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:24:33.932416 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:24:33.976130 containerd[1464]: time="2025-11-01T00:24:33.976067873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:24:33.982593 containerd[1464]: time="2025-11-01T00:24:33.981650441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:24:33.982593 containerd[1464]: time="2025-11-01T00:24:33.981697471Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:24:33.982849 containerd[1464]: time="2025-11-01T00:24:33.981723431Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983057743Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983130224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983243484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983267304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983437874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983453894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983466584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983475844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:24:33.984751 containerd[1464]: time="2025-11-01T00:24:33.983573444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:24:33.986112 containerd[1464]: time="2025-11-01T00:24:33.986072548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:24:33.986790 containerd[1464]: time="2025-11-01T00:24:33.986294698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:24:33.986790 containerd[1464]: time="2025-11-01T00:24:33.986318818Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:24:33.986790 containerd[1464]: time="2025-11-01T00:24:33.986711069Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:24:33.986873 containerd[1464]: time="2025-11-01T00:24:33.986818539Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:24:33.996833 containerd[1464]: time="2025-11-01T00:24:33.996796804Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:24:33.996875 containerd[1464]: time="2025-11-01T00:24:33.996851654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:24:33.996952 containerd[1464]: time="2025-11-01T00:24:33.996876804Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:24:33.996952 containerd[1464]: time="2025-11-01T00:24:33.996906714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:24:33.996952 containerd[1464]: time="2025-11-01T00:24:33.996921694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:24:33.997110 containerd[1464]: time="2025-11-01T00:24:33.997079084Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:24:33.997577 containerd[1464]: time="2025-11-01T00:24:33.997540585Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:24:33.997724 containerd[1464]: time="2025-11-01T00:24:33.997689775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 00:24:33.997779 containerd[1464]: time="2025-11-01T00:24:33.997723025Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:24:33.997849 containerd[1464]: time="2025-11-01T00:24:33.997813506Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:24:33.997883 containerd[1464]: time="2025-11-01T00:24:33.997859786Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.997919 containerd[1464]: time="2025-11-01T00:24:33.997881696Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.997919 containerd[1464]: time="2025-11-01T00:24:33.997894686Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.997919 containerd[1464]: time="2025-11-01T00:24:33.997907686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.997971 containerd[1464]: time="2025-11-01T00:24:33.997924986Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.997971 containerd[1464]: time="2025-11-01T00:24:33.997938826Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.997971 containerd[1464]: time="2025-11-01T00:24:33.997950236Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.997971 containerd[1464]: time="2025-11-01T00:24:33.997963766Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:24:33.998043 containerd[1464]: time="2025-11-01T00:24:33.997983436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998043 containerd[1464]: time="2025-11-01T00:24:33.997997966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998043 containerd[1464]: time="2025-11-01T00:24:33.998012796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998043 containerd[1464]: time="2025-11-01T00:24:33.998025886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998043 containerd[1464]: time="2025-11-01T00:24:33.998039806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998138 containerd[1464]: time="2025-11-01T00:24:33.998086966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998138 containerd[1464]: time="2025-11-01T00:24:33.998109766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998138 containerd[1464]: time="2025-11-01T00:24:33.998122806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998138 containerd[1464]: time="2025-11-01T00:24:33.998136286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998151066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998162896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998175806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998187456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998202976Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998223846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998235436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998248 containerd[1464]: time="2025-11-01T00:24:33.998252646Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998311336Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998328176Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998338446Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998349406Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998358066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998615157Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998640817Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:24:33.998827 containerd[1464]: time="2025-11-01T00:24:33.998658277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:24:34.004567 containerd[1464]: time="2025-11-01T00:24:34.003055693Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:24:34.004567 containerd[1464]: time="2025-11-01T00:24:34.003120874Z" level=info msg="Connect containerd service" Nov 1 00:24:34.004567 containerd[1464]: time="2025-11-01T00:24:34.003167804Z" level=info msg="using legacy CRI server" Nov 1 00:24:34.004567 containerd[1464]: time="2025-11-01T00:24:34.003176624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:24:34.004567 containerd[1464]: time="2025-11-01T00:24:34.003507844Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:24:34.007795 containerd[1464]: time="2025-11-01T00:24:34.006318668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:24:34.008287 containerd[1464]: time="2025-11-01T00:24:34.007990261Z" level=info msg="Start subscribing containerd event" Nov 1 00:24:34.008287 containerd[1464]: time="2025-11-01T00:24:34.008046001Z" level=info msg="Start recovering state" Nov 1 00:24:34.008287 containerd[1464]: time="2025-11-01T00:24:34.008108851Z" level=info msg="Start event monitor" Nov 1 00:24:34.008287 containerd[1464]: time="2025-11-01T00:24:34.008120271Z" level=info msg="Start snapshots syncer" Nov 1 00:24:34.008287 containerd[1464]: time="2025-11-01T00:24:34.008129031Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:24:34.008287 containerd[1464]: time="2025-11-01T00:24:34.008136391Z" level=info msg="Start streaming server" Nov 1 00:24:34.009974 containerd[1464]: time="2025-11-01T00:24:34.008583402Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:24:34.009974 containerd[1464]: time="2025-11-01T00:24:34.008644312Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:24:34.008779 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:24:34.011564 containerd[1464]: time="2025-11-01T00:24:34.010171704Z" level=info msg="containerd successfully booted in 0.119926s" Nov 1 00:24:34.026904 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:24:34.027095 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 1 00:24:34.546556 systemd-resolved[1383]: Clock change detected. Flushing caches. Nov 1 00:24:34.546842 systemd-timesyncd[1411]: Contacted time server 23.95.49.216:123 (0.flatcar.pool.ntp.org). Nov 1 00:24:34.547404 dbus-daemon[1439]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1515 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:24:34.546912 systemd-timesyncd[1411]: Initial clock synchronization to Sat 2025-11-01 00:24:34.546482 UTC. Nov 1 00:24:34.563714 systemd[1]: Starting polkit.service - Authorization Manager... Nov 1 00:24:34.594705 polkitd[1522]: Started polkitd version 121 Nov 1 00:24:34.609881 polkitd[1522]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:24:34.610012 polkitd[1522]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:24:34.615227 polkitd[1522]: Finished loading, compiling and executing 2 rules Nov 1 00:24:34.617554 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:24:34.617858 systemd[1]: Started polkit.service - Authorization Manager. Nov 1 00:24:34.620809 polkitd[1522]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:24:34.649693 systemd-resolved[1383]: System hostname changed to '172-234-26-141'. Nov 1 00:24:34.649880 systemd-hostnamed[1515]: Hostname set to <172-234-26-141> (transient) Nov 1 00:24:34.786986 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:24:34.818412 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:24:34.831302 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:24:34.848118 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:24:34.848654 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:24:34.858211 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:24:34.873209 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 00:24:34.882368 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:24:34.891691 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:24:34.894162 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:24:34.906630 coreos-metadata[1438]: Nov 01 00:24:34.906 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 1 00:24:34.919222 tar[1460]: linux-amd64/README.md Nov 1 00:24:34.922351 systemd-networkd[1380]: eth0: Gained IPv6LL Nov 1 00:24:34.927136 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:24:34.939760 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:24:34.949247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:24:34.953436 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:24:34.956185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:24:34.988396 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:24:35.003341 coreos-metadata[1438]: Nov 01 00:24:35.003 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 1 00:24:35.184831 coreos-metadata[1438]: Nov 01 00:24:35.184 INFO Fetch successful Nov 1 00:24:35.185180 coreos-metadata[1438]: Nov 01 00:24:35.185 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 1 00:24:35.341426 coreos-metadata[1510]: Nov 01 00:24:35.341 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 1 00:24:35.431255 coreos-metadata[1510]: Nov 01 00:24:35.431 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 1 00:24:35.437575 coreos-metadata[1438]: Nov 01 00:24:35.437 INFO Fetch successful Nov 1 00:24:35.525483 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:24:35.528629 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:24:35.569703 coreos-metadata[1510]: Nov 01 00:24:35.569 INFO Fetch successful Nov 1 00:24:35.589775 update-ssh-keys[1586]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:24:35.591693 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:24:35.594385 systemd[1]: Finished sshkeys.service. Nov 1 00:24:35.855353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:24:35.856731 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:24:35.860659 systemd[1]: Startup finished in 1.082s (kernel) + 8.347s (initrd) + 5.344s (userspace) = 14.774s. Nov 1 00:24:35.897429 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:24:36.361535 kubelet[1594]: E1101 00:24:36.361479 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:24:36.365152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:24:36.365363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:24:37.127479 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Nov 1 00:24:37.132455 systemd[1]: Started sshd@0-172.234.26.141:22-139.178.68.195:56702.service - OpenSSH per-connection server daemon (139.178.68.195:56702). Nov 1 00:24:37.460169 sshd[1606]: Accepted publickey for core from 139.178.68.195 port 56702 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:37.462262 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:37.471601 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:24:37.477436 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:24:37.480676 systemd-logind[1448]: New session 1 of user core. Nov 1 00:24:37.491612 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:24:37.498264 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:24:37.510764 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:37.610719 systemd[1610]: Queued start job for default target default.target. Nov 1 00:24:37.618343 systemd[1610]: Created slice app.slice - User Application Slice. Nov 1 00:24:37.618379 systemd[1610]: Reached target paths.target - Paths. Nov 1 00:24:37.618394 systemd[1610]: Reached target timers.target - Timers. Nov 1 00:24:37.619898 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:24:37.631252 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:24:37.631396 systemd[1610]: Reached target sockets.target - Sockets. Nov 1 00:24:37.631430 systemd[1610]: Reached target basic.target - Basic System. Nov 1 00:24:37.631473 systemd[1610]: Reached target default.target - Main User Target. Nov 1 00:24:37.631510 systemd[1610]: Startup finished in 114ms. Nov 1 00:24:37.631919 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:24:37.639168 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:24:37.892988 systemd[1]: Started sshd@1-172.234.26.141:22-139.178.68.195:56704.service - OpenSSH per-connection server daemon (139.178.68.195:56704). Nov 1 00:24:38.218547 sshd[1621]: Accepted publickey for core from 139.178.68.195 port 56704 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:38.220387 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:38.225142 systemd-logind[1448]: New session 2 of user core. Nov 1 00:24:38.236345 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:24:38.466474 sshd[1621]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:38.471260 systemd[1]: sshd@1-172.234.26.141:22-139.178.68.195:56704.service: Deactivated successfully. Nov 1 00:24:38.473851 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:24:38.474617 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:24:38.475803 systemd-logind[1448]: Removed session 2. Nov 1 00:24:38.529252 systemd[1]: Started sshd@2-172.234.26.141:22-139.178.68.195:56708.service - OpenSSH per-connection server daemon (139.178.68.195:56708). Nov 1 00:24:38.849144 sshd[1628]: Accepted publickey for core from 139.178.68.195 port 56708 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:38.851292 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:38.857902 systemd-logind[1448]: New session 3 of user core. 
Nov 1 00:24:38.864190 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:24:39.089425 sshd[1628]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:39.093323 systemd[1]: sshd@2-172.234.26.141:22-139.178.68.195:56708.service: Deactivated successfully. Nov 1 00:24:39.094966 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:24:39.095630 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:24:39.096631 systemd-logind[1448]: Removed session 3. Nov 1 00:24:39.148733 systemd[1]: Started sshd@3-172.234.26.141:22-139.178.68.195:56714.service - OpenSSH per-connection server daemon (139.178.68.195:56714). Nov 1 00:24:39.497701 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 56714 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:39.499355 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:39.504177 systemd-logind[1448]: New session 4 of user core. Nov 1 00:24:39.515183 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:24:39.756580 sshd[1635]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:39.761569 systemd[1]: sshd@3-172.234.26.141:22-139.178.68.195:56714.service: Deactivated successfully. Nov 1 00:24:39.763791 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:24:39.764544 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:24:39.765572 systemd-logind[1448]: Removed session 4. Nov 1 00:24:39.814542 systemd[1]: Started sshd@4-172.234.26.141:22-139.178.68.195:56718.service - OpenSSH per-connection server daemon (139.178.68.195:56718). Nov 1 00:24:40.135803 sshd[1642]: Accepted publickey for core from 139.178.68.195 port 56718 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:40.137413 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:40.143697 systemd-logind[1448]: New session 5 of user core. Nov 1 00:24:40.151160 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:24:40.342869 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:24:40.343271 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:24:40.360879 sudo[1645]: pam_unix(sudo:session): session closed for user root Nov 1 00:24:40.411340 sshd[1642]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:40.416228 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:24:40.417318 systemd[1]: sshd@4-172.234.26.141:22-139.178.68.195:56718.service: Deactivated successfully. Nov 1 00:24:40.420019 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:24:40.420950 systemd-logind[1448]: Removed session 5. Nov 1 00:24:40.475336 systemd[1]: Started sshd@5-172.234.26.141:22-139.178.68.195:56720.service - OpenSSH per-connection server daemon (139.178.68.195:56720). Nov 1 00:24:40.811305 sshd[1650]: Accepted publickey for core from 139.178.68.195 port 56720 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:40.813365 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:40.818655 systemd-logind[1448]: New session 6 of user core. Nov 1 00:24:40.828192 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 1 00:24:41.009533 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:24:41.010260 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:24:41.014630 sudo[1654]: pam_unix(sudo:session): session closed for user root Nov 1 00:24:41.020773 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:24:41.021195 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:24:41.045247 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:24:41.048074 auditctl[1657]: No rules Nov 1 00:24:41.048669 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:24:41.049203 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:24:41.055277 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:24:41.085249 augenrules[1675]: No rules Nov 1 00:24:41.087527 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:24:41.089604 sudo[1653]: pam_unix(sudo:session): session closed for user root Nov 1 00:24:41.140984 sshd[1650]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:41.144712 systemd[1]: sshd@5-172.234.26.141:22-139.178.68.195:56720.service: Deactivated successfully. Nov 1 00:24:41.147339 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:24:41.148936 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:24:41.150697 systemd-logind[1448]: Removed session 6. Nov 1 00:24:41.215667 systemd[1]: Started sshd@6-172.234.26.141:22-139.178.68.195:56722.service - OpenSSH per-connection server daemon (139.178.68.195:56722). Nov 1 00:24:41.579430 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 56722 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:24:41.581440 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:41.587222 systemd-logind[1448]: New session 7 of user core. Nov 1 00:24:41.597154 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:24:41.780167 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:24:41.780531 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:24:42.054522 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:24:42.054583 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:24:42.315128 dockerd[1702]: time="2025-11-01T00:24:42.313589515Z" level=info msg="Starting up" Nov 1 00:24:42.392736 systemd[1]: var-lib-docker-metacopy\x2dcheck533796959-merged.mount: Deactivated successfully. Nov 1 00:24:42.414991 dockerd[1702]: time="2025-11-01T00:24:42.414800407Z" level=info msg="Loading containers: start." Nov 1 00:24:42.521068 kernel: Initializing XFRM netlink socket Nov 1 00:24:42.608042 systemd-networkd[1380]: docker0: Link UP Nov 1 00:24:42.633775 dockerd[1702]: time="2025-11-01T00:24:42.633709985Z" level=info msg="Loading containers: done." 
Nov 1 00:24:42.652165 dockerd[1702]: time="2025-11-01T00:24:42.652121013Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:24:42.652422 dockerd[1702]: time="2025-11-01T00:24:42.652234483Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:24:42.652422 dockerd[1702]: time="2025-11-01T00:24:42.652349333Z" level=info msg="Daemon has completed initialization" Nov 1 00:24:42.685052 dockerd[1702]: time="2025-11-01T00:24:42.684939562Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:24:42.685783 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:24:43.467772 containerd[1464]: time="2025-11-01T00:24:43.467717226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:24:44.467418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569044192.mount: Deactivated successfully. Nov 1 00:24:45.444983 containerd[1464]: time="2025-11-01T00:24:45.444189791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:45.444983 containerd[1464]: time="2025-11-01T00:24:45.444356541Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 1 00:24:45.446566 containerd[1464]: time="2025-11-01T00:24:45.446507684Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:45.452715 containerd[1464]: time="2025-11-01T00:24:45.452312623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:45.454690 containerd[1464]: time="2025-11-01T00:24:45.454623976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.98686065s" Nov 1 00:24:45.454690 containerd[1464]: time="2025-11-01T00:24:45.454674886Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:24:45.456008 containerd[1464]: time="2025-11-01T00:24:45.455477147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:24:46.615852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:24:46.623193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:24:46.788187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:24:46.797364 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:24:46.860834 kubelet[1911]: E1101 00:24:46.859973 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:24:46.865479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:24:46.865699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:24:47.021479 containerd[1464]: time="2025-11-01T00:24:47.021058806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:47.022455 containerd[1464]: time="2025-11-01T00:24:47.022352688Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 1 00:24:47.023260 containerd[1464]: time="2025-11-01T00:24:47.023204959Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:47.025989 containerd[1464]: time="2025-11-01T00:24:47.025941343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:47.027274 containerd[1464]: time="2025-11-01T00:24:47.027160335Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.571652478s" Nov 1 00:24:47.027274 containerd[1464]: time="2025-11-01T00:24:47.027191505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:24:47.028137 containerd[1464]: time="2025-11-01T00:24:47.028102836Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:24:48.318110 containerd[1464]: time="2025-11-01T00:24:48.318056641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:48.319466 containerd[1464]: time="2025-11-01T00:24:48.319378643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 1 00:24:48.320078 containerd[1464]: time="2025-11-01T00:24:48.319766274Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:48.322436 containerd[1464]: time="2025-11-01T00:24:48.322397008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:24:48.323629 containerd[1464]: time="2025-11-01T00:24:48.323510319Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.295368043s" Nov 1 00:24:48.323629 containerd[1464]: time="2025-11-01T00:24:48.323539669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 00:24:48.324403 containerd[1464]: time="2025-11-01T00:24:48.324377690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:24:49.845679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299963889.mount: Deactivated successfully. Nov 1 00:24:50.159387 containerd[1464]: time="2025-11-01T00:24:50.159199163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:50.160298 containerd[1464]: time="2025-11-01T00:24:50.160257924Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 1 00:24:50.160888 containerd[1464]: time="2025-11-01T00:24:50.160842715Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:50.162864 containerd[1464]: time="2025-11-01T00:24:50.162822028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:50.163732 containerd[1464]: time="2025-11-01T00:24:50.163692199Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.839280188s" Nov 1 00:24:50.163811 containerd[1464]: time="2025-11-01T00:24:50.163793929Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 00:24:50.164923 containerd[1464]: time="2025-11-01T00:24:50.164571661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:24:51.065873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388957531.mount: Deactivated successfully.
Nov 1 00:24:51.899756 containerd[1464]: time="2025-11-01T00:24:51.899703673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:51.902224 containerd[1464]: time="2025-11-01T00:24:51.901999717Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 1 00:24:51.903058 containerd[1464]: time="2025-11-01T00:24:51.902903408Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:51.906062 containerd[1464]: time="2025-11-01T00:24:51.905708822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:51.907457 containerd[1464]: time="2025-11-01T00:24:51.906763094Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.742162863s" Nov 1 00:24:51.907457 containerd[1464]: time="2025-11-01T00:24:51.906794354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 00:24:51.908582 containerd[1464]: time="2025-11-01T00:24:51.908408296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:24:52.722062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338101145.mount: Deactivated successfully. 
Nov 1 00:24:52.726101 containerd[1464]: time="2025-11-01T00:24:52.725801402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:52.726679 containerd[1464]: time="2025-11-01T00:24:52.726632973Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 1 00:24:52.729128 containerd[1464]: time="2025-11-01T00:24:52.727252774Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:52.730494 containerd[1464]: time="2025-11-01T00:24:52.730451119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:52.731360 containerd[1464]: time="2025-11-01T00:24:52.731334580Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 822.893644ms" Nov 1 00:24:52.731452 containerd[1464]: time="2025-11-01T00:24:52.731434861Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 00:24:52.732580 containerd[1464]: time="2025-11-01T00:24:52.732546322Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:24:55.448547 containerd[1464]: time="2025-11-01T00:24:55.446971264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:55.448547 containerd[1464]: time="2025-11-01T00:24:55.448217775Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 1 00:24:55.448547 containerd[1464]: time="2025-11-01T00:24:55.448490126Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:55.452108 containerd[1464]: time="2025-11-01T00:24:55.452021921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:24:55.453803 containerd[1464]: time="2025-11-01T00:24:55.453762324Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.721172382s" Nov 1 00:24:55.453862 containerd[1464]: time="2025-11-01T00:24:55.453803014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 00:24:57.118402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:24:57.126306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
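With etcd 3.6.4-0 pulled, every image a v1.34 control plane needs is now local: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.34.1, coredns v1.12.1, pause 3.10.1, and etcd 3.6.4-0. This is the set that `kubeadm config images pull` fetches for a cluster configuration along the following lines (a sketch only: the apiVersion assumes the v1beta4 kubeadm API current for this release, and nothing below is read from this host):

    # Illustrative kubeadm ClusterConfiguration matching the pulled image set.
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: v1.34.1        # matches every component tag pulled above
    imageRepository: registry.k8s.io  # the registry all of these pulls target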
Nov 1 00:24:57.330305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:24:57.339557 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:24:57.384618 kubelet[2055]: E1101 00:24:57.384014 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:24:57.392375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:24:57.392569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:24:58.073631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:24:58.081332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:24:58.119810 systemd[1]: Reloading requested from client PID 2069 ('systemctl') (unit session-7.scope)... Nov 1 00:24:58.119823 systemd[1]: Reloading... Nov 1 00:24:58.305065 zram_generator::config[2109]: No configuration found. Nov 1 00:24:58.436436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:24:58.521576 systemd[1]: Reloading finished in 401 ms. Nov 1 00:24:58.583846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:24:58.588637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:24:58.590139 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:24:58.590405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:24:58.596313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:24:58.759519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:24:58.765372 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:24:58.822809 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:24:58.822809 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:24:58.823314 kubelet[2165]: I1101 00:24:58.822857 2165 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:24:59.143563 kubelet[2165]: I1101 00:24:59.143426 2165 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:24:59.143563 kubelet[2165]: I1101 00:24:59.143453 2165 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:24:59.147341 kubelet[2165]: I1101 00:24:59.147309 2165 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:24:59.147341 kubelet[2165]: I1101 00:24:59.147334 2165 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
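Both kubelet exits above (PIDs 1911 and 2055) have the same cause: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it, so the unit crash-loops under its restart policy until the restart that follows the systemd reload. A minimal sketch of the file kubeadm generates; the first four fields mirror what kubelet[2165] logs below (systemd cgroup driver, static pod path, client CA bundle, client certificate rotation), while clusterDomain and clusterDNS are assumed kubeadm defaults, not values read from this host:

    # /var/lib/kubelet/config.yaml, a sketch of the file whose absence caused the exits.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # "CgroupDriver":"systemd" in the nodeConfig dump
    staticPodPath: /etc/kubernetes/manifests   # "Adding static pod path" below
    rotateCertificates: true                   # "Client rotation is on" below
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt   # the client-ca-bundle controller below
    clusterDomain: cluster.local               # assumption: kubeadm default
    clusterDNS:
      - 10.96.0.10                             # assumption: kubeadm default service DNS IP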
Nov 1 00:24:59.147875 kubelet[2165]: I1101 00:24:59.147847 2165 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:24:59.155904 kubelet[2165]: E1101 00:24:59.155856 2165 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.26.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:24:59.161435 kubelet[2165]: I1101 00:24:59.161341 2165 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:24:59.168069 kubelet[2165]: E1101 00:24:59.167205 2165 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:24:59.168069 kubelet[2165]: I1101 00:24:59.167249 2165 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:24:59.172763 kubelet[2165]: I1101 00:24:59.172732 2165 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 00:24:59.174333 kubelet[2165]: I1101 00:24:59.174282 2165 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:24:59.174478 kubelet[2165]: I1101 00:24:59.174318 2165 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-26-141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:24:59.174478 kubelet[2165]: I1101 00:24:59.174473 2165 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:24:59.174607 kubelet[2165]: I1101 00:24:59.174488 2165 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:24:59.174607 kubelet[2165]: I1101 00:24:59.174586 2165 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Nov 1 00:24:59.177343 kubelet[2165]: I1101 00:24:59.177307 2165 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:24:59.179299 kubelet[2165]: I1101 00:24:59.179262 2165 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:24:59.179299 kubelet[2165]: I1101 00:24:59.179292 2165 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:24:59.179370 kubelet[2165]: I1101 00:24:59.179338 2165 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:24:59.179370 kubelet[2165]: I1101 00:24:59.179357 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:24:59.185067 kubelet[2165]: I1101 00:24:59.184178 2165 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:24:59.185067 kubelet[2165]: I1101 00:24:59.184820 2165 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:24:59.185067 kubelet[2165]: I1101 00:24:59.184848 2165 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:24:59.185067 kubelet[2165]: W1101 00:24:59.184900 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:24:59.186929 kubelet[2165]: E1101 00:24:59.186889 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.26.141:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-26-141&limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:24:59.187097 kubelet[2165]: E1101 00:24:59.187020 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.26.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:24:59.188785 kubelet[2165]: I1101 00:24:59.188768 2165 server.go:1262] "Started kubelet" Nov 1 00:24:59.190113 kubelet[2165]: I1101 00:24:59.190098 2165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:24:59.195342 kubelet[2165]: E1101 00:24:59.193875 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.26.141:6443/api/v1/namespaces/default/events\": dial tcp 172.234.26.141:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-26-141.1873ba485cc398b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-26-141,UID:172-234-26-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-26-141,},FirstTimestamp:2025-11-01 00:24:59.188730036 +0000 UTC m=+0.417546817,LastTimestamp:2025-11-01 00:24:59.188730036 +0000 UTC m=+0.417546817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-26-141,}" Nov 1 00:24:59.195496 kubelet[2165]: I1101 00:24:59.195463 2165 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:24:59.197687 
kubelet[2165]: I1101 00:24:59.197645 2165 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:24:59.203237 kubelet[2165]: I1101 00:24:59.203202 2165 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:24:59.203506 kubelet[2165]: E1101 00:24:59.203454 2165 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-26-141\" not found" Nov 1 00:24:59.203972 kubelet[2165]: I1101 00:24:59.203936 2165 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:24:59.204022 kubelet[2165]: I1101 00:24:59.203974 2165 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:24:59.204022 kubelet[2165]: I1101 00:24:59.204314 2165 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:24:59.204022 kubelet[2165]: I1101 00:24:59.204726 2165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:24:59.207418 kubelet[2165]: I1101 00:24:59.207384 2165 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:24:59.207495 kubelet[2165]: I1101 00:24:59.207468 2165 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:24:59.209663 kubelet[2165]: E1101 00:24:59.209633 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.26.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-26-141?timeout=10s\": dial tcp 172.234.26.141:6443: connect: connection refused" interval="200ms" Nov 1 00:24:59.209858 kubelet[2165]: E1101 00:24:59.209817 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.26.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:24:59.210258 kubelet[2165]: I1101 00:24:59.210079 2165 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:24:59.210258 kubelet[2165]: I1101 00:24:59.210189 2165 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:24:59.215174 kubelet[2165]: E1101 00:24:59.213597 2165 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:24:59.217295 kubelet[2165]: I1101 00:24:59.217272 2165 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:24:59.235550 kubelet[2165]: I1101 00:24:59.235509 2165 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:24:59.237301 kubelet[2165]: I1101 00:24:59.237264 2165 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:24:59.237301 kubelet[2165]: I1101 00:24:59.237296 2165 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:24:59.237388 kubelet[2165]: I1101 00:24:59.237339 2165 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:24:59.237679 kubelet[2165]: E1101 00:24:59.237643 2165 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:24:59.247007 kubelet[2165]: E1101 00:24:59.246642 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.26.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:24:59.260476 kubelet[2165]: I1101 00:24:59.260286 2165 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:24:59.260476 kubelet[2165]: I1101 00:24:59.260310 2165 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:24:59.260476 kubelet[2165]: I1101 00:24:59.260330 2165 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:24:59.262569 kubelet[2165]: I1101 00:24:59.262514 2165 policy_none.go:49] "None policy: Start" Nov 1 00:24:59.262569 kubelet[2165]: I1101 00:24:59.262539 2165 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:24:59.262569 kubelet[2165]: I1101 00:24:59.262554 2165 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:24:59.263480 kubelet[2165]: I1101 00:24:59.263460 2165 policy_none.go:47] "Start" Nov 1 00:24:59.269466 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:24:59.284277 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:24:59.298457 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:24:59.299851 kubelet[2165]: E1101 00:24:59.299819 2165 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:24:59.300020 kubelet[2165]: I1101 00:24:59.299991 2165 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:24:59.300095 kubelet[2165]: I1101 00:24:59.300013 2165 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:24:59.300650 kubelet[2165]: I1101 00:24:59.300613 2165 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:24:59.302191 kubelet[2165]: E1101 00:24:59.302114 2165 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:24:59.302257 kubelet[2165]: E1101 00:24:59.302162 2165 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-26-141\" not found" Nov 1 00:24:59.358525 systemd[1]: Created slice kubepods-burstable-pod68a433feb0a7c09d48de2ed47656bb0c.slice - libcontainer container kubepods-burstable-pod68a433feb0a7c09d48de2ed47656bb0c.slice. 
Nov 1 00:24:59.367922 kubelet[2165]: E1101 00:24:59.367581 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:24:59.371588 systemd[1]: Created slice kubepods-burstable-pod7fd7db43d57c4031f94130845af0aac3.slice - libcontainer container kubepods-burstable-pod7fd7db43d57c4031f94130845af0aac3.slice. Nov 1 00:24:59.374239 kubelet[2165]: E1101 00:24:59.374215 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:24:59.377117 systemd[1]: Created slice kubepods-burstable-podda789d0f53b0d289258e622e55c4ce2f.slice - libcontainer container kubepods-burstable-podda789d0f53b0d289258e622e55c4ce2f.slice. Nov 1 00:24:59.378780 kubelet[2165]: E1101 00:24:59.378740 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:24:59.404206 kubelet[2165]: I1101 00:24:59.402360 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-234-26-141" Nov 1 00:24:59.404206 kubelet[2165]: E1101 00:24:59.402776 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.26.141:6443/api/v1/nodes\": dial tcp 172.234.26.141:6443: connect: connection refused" node="172-234-26-141" Nov 1 00:24:59.410777 kubelet[2165]: E1101 00:24:59.410736 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.26.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-26-141?timeout=10s\": dial tcp 172.234.26.141:6443: connect: connection refused" interval="400ms" Nov 1 00:24:59.508461 kubelet[2165]: I1101 00:24:59.508404 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-flexvolume-dir\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:24:59.508461 kubelet[2165]: I1101 00:24:59.508460 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-k8s-certs\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:24:59.508662 kubelet[2165]: I1101 00:24:59.508489 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da789d0f53b0d289258e622e55c4ce2f-kubeconfig\") pod \"kube-scheduler-172-234-26-141\" (UID: \"da789d0f53b0d289258e622e55c4ce2f\") " pod="kube-system/kube-scheduler-172-234-26-141" Nov 1 00:24:59.508662 kubelet[2165]: I1101 00:24:59.508531 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a433feb0a7c09d48de2ed47656bb0c-ca-certs\") pod \"kube-apiserver-172-234-26-141\" (UID: \"68a433feb0a7c09d48de2ed47656bb0c\") " pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:24:59.508662 kubelet[2165]: I1101 00:24:59.508554 2165 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68a433feb0a7c09d48de2ed47656bb0c-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-26-141\" (UID: \"68a433feb0a7c09d48de2ed47656bb0c\") " pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:24:59.508662 kubelet[2165]: I1101 00:24:59.508573 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-kubeconfig\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:24:59.508662 kubelet[2165]: I1101 00:24:59.508589 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:24:59.508776 kubelet[2165]: I1101 00:24:59.508608 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68a433feb0a7c09d48de2ed47656bb0c-k8s-certs\") pod \"kube-apiserver-172-234-26-141\" (UID: \"68a433feb0a7c09d48de2ed47656bb0c\") " pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:24:59.508776 kubelet[2165]: I1101 00:24:59.508623 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-ca-certs\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:24:59.605072 kubelet[2165]: I1101 00:24:59.605001 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-234-26-141" Nov 1 00:24:59.605290 kubelet[2165]: E1101 00:24:59.605250 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.26.141:6443/api/v1/nodes\": dial tcp 172.234.26.141:6443: connect: connection refused" node="172-234-26-141" Nov 1 00:24:59.671203 kubelet[2165]: E1101 00:24:59.670421 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:24:59.671789 containerd[1464]: time="2025-11-01T00:24:59.671355890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-26-141,Uid:68a433feb0a7c09d48de2ed47656bb0c,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:59.676467 kubelet[2165]: E1101 00:24:59.676399 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:24:59.677237 containerd[1464]: time="2025-11-01T00:24:59.676939178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-26-141,Uid:7fd7db43d57c4031f94130845af0aac3,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:59.680963 kubelet[2165]: E1101 00:24:59.680900 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:24:59.681496 containerd[1464]: time="2025-11-01T00:24:59.681424675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-26-141,Uid:da789d0f53b0d289258e622e55c4ce2f,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:59.734775 kubelet[2165]: E1101 00:24:59.734663 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.26.141:6443/api/v1/namespaces/default/events\": dial tcp 172.234.26.141:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-26-141.1873ba485cc398b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-26-141,UID:172-234-26-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-26-141,},FirstTimestamp:2025-11-01 00:24:59.188730036 +0000 UTC m=+0.417546817,LastTimestamp:2025-11-01 00:24:59.188730036 +0000 UTC m=+0.417546817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-26-141,}" Nov 1 00:24:59.812481 kubelet[2165]: E1101 00:24:59.812409 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.26.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-26-141?timeout=10s\": dial tcp 172.234.26.141:6443: connect: connection refused" interval="800ms" Nov 1 00:25:00.007785 kubelet[2165]: I1101 00:25:00.007596 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-234-26-141" Nov 1 00:25:00.008624 kubelet[2165]: E1101 00:25:00.008061 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.26.141:6443/api/v1/nodes\": dial tcp 172.234.26.141:6443: connect: connection refused" node="172-234-26-141" Nov 1 00:25:00.354352 kubelet[2165]: E1101 00:25:00.354172 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.26.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:25:00.493532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297950132.mount: Deactivated successfully. 
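The recurring "Nameserver limits exceeded" errors are the kubelet trimming the host resolv.conf to the three-server maximum it can propagate to pods (a limit inherited from glibc's MAXNS): only 172.232.0.16, 172.232.0.21 and 172.232.0.13 are applied and any further entries are dropped. Where the warning matters, the usual remedy is to point the kubelet at a resolv.conf that lists at most three servers. A hedged sketch using the KubeletConfiguration resolvConf field; the path is an assumption for a systemd-resolved host, not taken from this log:

    # KubeletConfiguration fragment, merged into /var/lib/kubelet/config.yaml.
    resolvConf: /run/systemd/resolve/resolv.conf   # only helps if this lists <= 3 servers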
Nov 1 00:25:00.499674 containerd[1464]: time="2025-11-01T00:25:00.499619842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:25:00.500896 containerd[1464]: time="2025-11-01T00:25:00.500836644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:25:00.501503 containerd[1464]: time="2025-11-01T00:25:00.501466375Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:25:00.502283 containerd[1464]: time="2025-11-01T00:25:00.502236496Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:25:00.504216 containerd[1464]: time="2025-11-01T00:25:00.504186559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:25:00.504853 containerd[1464]: time="2025-11-01T00:25:00.504748870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:25:00.505517 containerd[1464]: time="2025-11-01T00:25:00.505475291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:25:00.508048 containerd[1464]: time="2025-11-01T00:25:00.506918663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:25:00.511859 containerd[1464]: time="2025-11-01T00:25:00.511823510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 834.561741ms" Nov 1 00:25:00.512213 containerd[1464]: time="2025-11-01T00:25:00.512112271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 830.608136ms" Nov 1 00:25:00.512213 containerd[1464]: time="2025-11-01T00:25:00.512191191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 840.746931ms" Nov 1 00:25:00.530069 kubelet[2165]: E1101 00:25:00.528520 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.26.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:25:00.618704 kubelet[2165]: E1101 00:25:00.613699 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.26.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-26-141?timeout=10s\": dial tcp 172.234.26.141:6443: connect: connection refused" interval="1.6s" Nov 1 00:25:00.645236 kubelet[2165]: E1101 00:25:00.645166 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.26.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:25:00.653409 containerd[1464]: time="2025-11-01T00:25:00.653337603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:00.655383 containerd[1464]: time="2025-11-01T00:25:00.655311196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:00.655680 containerd[1464]: time="2025-11-01T00:25:00.655522116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:00.655734 containerd[1464]: time="2025-11-01T00:25:00.655664186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:00.662392 containerd[1464]: time="2025-11-01T00:25:00.660159443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:00.662905 containerd[1464]: time="2025-11-01T00:25:00.660971044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:00.662905 containerd[1464]: time="2025-11-01T00:25:00.662488266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:00.662905 containerd[1464]: time="2025-11-01T00:25:00.662838047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:00.668864 containerd[1464]: time="2025-11-01T00:25:00.667366044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:00.668864 containerd[1464]: time="2025-11-01T00:25:00.667400524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:00.668864 containerd[1464]: time="2025-11-01T00:25:00.667414274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:00.668864 containerd[1464]: time="2025-11-01T00:25:00.667531084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:00.705397 systemd[1]: Started cri-containerd-2509e3f6b537e48e9b70ed6fb0b0364e0cc14178611ff696b7d9b4d92711138e.scope - libcontainer container 2509e3f6b537e48e9b70ed6fb0b0364e0cc14178611ff696b7d9b4d92711138e. 
Nov 1 00:25:00.707660 systemd[1]: Started cri-containerd-78b576428cccf3217e408e04f3944b341d028ea02a37e9a3eea96d741c333f1c.scope - libcontainer container 78b576428cccf3217e408e04f3944b341d028ea02a37e9a3eea96d741c333f1c. Nov 1 00:25:00.714483 systemd[1]: Started cri-containerd-1b2766bfd6e027444595a743bffc929a2850a743b8864c212a10652b84f14741.scope - libcontainer container 1b2766bfd6e027444595a743bffc929a2850a743b8864c212a10652b84f14741. Nov 1 00:25:00.721748 kubelet[2165]: E1101 00:25:00.721721 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.26.141:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-26-141&limit=500&resourceVersion=0\": dial tcp 172.234.26.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:25:00.772173 containerd[1464]: time="2025-11-01T00:25:00.770808049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-26-141,Uid:da789d0f53b0d289258e622e55c4ce2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b576428cccf3217e408e04f3944b341d028ea02a37e9a3eea96d741c333f1c\"" Nov 1 00:25:00.772810 kubelet[2165]: E1101 00:25:00.772579 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:00.785812 containerd[1464]: time="2025-11-01T00:25:00.785746491Z" level=info msg="CreateContainer within sandbox \"78b576428cccf3217e408e04f3944b341d028ea02a37e9a3eea96d741c333f1c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:25:00.798417 containerd[1464]: time="2025-11-01T00:25:00.798371180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-26-141,Uid:7fd7db43d57c4031f94130845af0aac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2509e3f6b537e48e9b70ed6fb0b0364e0cc14178611ff696b7d9b4d92711138e\"" Nov 1 00:25:00.799382 kubelet[2165]: E1101 00:25:00.799179 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:00.805259 containerd[1464]: time="2025-11-01T00:25:00.804876410Z" level=info msg="CreateContainer within sandbox \"2509e3f6b537e48e9b70ed6fb0b0364e0cc14178611ff696b7d9b4d92711138e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:25:00.811354 kubelet[2165]: I1101 00:25:00.811330 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-234-26-141" Nov 1 00:25:00.811933 containerd[1464]: time="2025-11-01T00:25:00.811895230Z" level=info msg="CreateContainer within sandbox \"78b576428cccf3217e408e04f3944b341d028ea02a37e9a3eea96d741c333f1c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f854e9faf361f22898606ebf69bf07cf116d8e7ce9270efdd57d4bcc4287db2\"" Nov 1 00:25:00.812071 kubelet[2165]: E1101 00:25:00.812020 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.26.141:6443/api/v1/nodes\": dial tcp 172.234.26.141:6443: connect: connection refused" node="172-234-26-141" Nov 1 00:25:00.813268 containerd[1464]: time="2025-11-01T00:25:00.813246402Z" level=info msg="StartContainer for \"9f854e9faf361f22898606ebf69bf07cf116d8e7ce9270efdd57d4bcc4287db2\"" Nov 1 00:25:00.814838 containerd[1464]: time="2025-11-01T00:25:00.814807815Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-26-141,Uid:68a433feb0a7c09d48de2ed47656bb0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b2766bfd6e027444595a743bffc929a2850a743b8864c212a10652b84f14741\"" Nov 1 00:25:00.816487 kubelet[2165]: E1101 00:25:00.816413 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:00.820400 containerd[1464]: time="2025-11-01T00:25:00.820356763Z" level=info msg="CreateContainer within sandbox \"1b2766bfd6e027444595a743bffc929a2850a743b8864c212a10652b84f14741\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:25:00.826652 containerd[1464]: time="2025-11-01T00:25:00.826628963Z" level=info msg="CreateContainer within sandbox \"2509e3f6b537e48e9b70ed6fb0b0364e0cc14178611ff696b7d9b4d92711138e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b5b0301662dcafe18be26c6c61a857128e0890e562ef6df607e941a186c04e0c\"" Nov 1 00:25:00.827460 containerd[1464]: time="2025-11-01T00:25:00.827439534Z" level=info msg="StartContainer for \"b5b0301662dcafe18be26c6c61a857128e0890e562ef6df607e941a186c04e0c\"" Nov 1 00:25:00.835323 containerd[1464]: time="2025-11-01T00:25:00.835297046Z" level=info msg="CreateContainer within sandbox \"1b2766bfd6e027444595a743bffc929a2850a743b8864c212a10652b84f14741\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a01116b76c26398dd7ec59b9a9e9185b117785d1fb03e82e5e54c1233cec6891\"" Nov 1 00:25:00.835721 containerd[1464]: time="2025-11-01T00:25:00.835676666Z" level=info msg="StartContainer for \"a01116b76c26398dd7ec59b9a9e9185b117785d1fb03e82e5e54c1233cec6891\"" Nov 1 00:25:00.869178 systemd[1]: Started cri-containerd-9f854e9faf361f22898606ebf69bf07cf116d8e7ce9270efdd57d4bcc4287db2.scope - libcontainer container 9f854e9faf361f22898606ebf69bf07cf116d8e7ce9270efdd57d4bcc4287db2. Nov 1 00:25:00.881358 systemd[1]: Started cri-containerd-b5b0301662dcafe18be26c6c61a857128e0890e562ef6df607e941a186c04e0c.scope - libcontainer container b5b0301662dcafe18be26c6c61a857128e0890e562ef6df607e941a186c04e0c. Nov 1 00:25:00.891153 systemd[1]: Started cri-containerd-a01116b76c26398dd7ec59b9a9e9185b117785d1fb03e82e5e54c1233cec6891.scope - libcontainer container a01116b76c26398dd7ec59b9a9e9185b117785d1fb03e82e5e54c1233cec6891. 
Nov 1 00:25:00.966778 containerd[1464]: time="2025-11-01T00:25:00.966694593Z" level=info msg="StartContainer for \"b5b0301662dcafe18be26c6c61a857128e0890e562ef6df607e941a186c04e0c\" returns successfully" Nov 1 00:25:00.981104 containerd[1464]: time="2025-11-01T00:25:00.980075773Z" level=info msg="StartContainer for \"9f854e9faf361f22898606ebf69bf07cf116d8e7ce9270efdd57d4bcc4287db2\" returns successfully" Nov 1 00:25:00.987193 containerd[1464]: time="2025-11-01T00:25:00.987116023Z" level=info msg="StartContainer for \"a01116b76c26398dd7ec59b9a9e9185b117785d1fb03e82e5e54c1233cec6891\" returns successfully" Nov 1 00:25:01.261853 kubelet[2165]: E1101 00:25:01.261742 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:25:01.262278 kubelet[2165]: E1101 00:25:01.261920 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:01.267690 kubelet[2165]: E1101 00:25:01.267659 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:25:01.267830 kubelet[2165]: E1101 00:25:01.267804 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:01.277946 kubelet[2165]: E1101 00:25:01.277868 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:25:01.278961 kubelet[2165]: E1101 00:25:01.278927 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:02.280730 kubelet[2165]: E1101 00:25:02.280688 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:25:02.281304 kubelet[2165]: E1101 00:25:02.280827 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:02.282472 kubelet[2165]: E1101 00:25:02.282364 2165 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:25:02.282524 kubelet[2165]: E1101 00:25:02.282475 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:02.414632 kubelet[2165]: I1101 00:25:02.414438 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-234-26-141" Nov 1 00:25:03.137077 kubelet[2165]: E1101 00:25:03.136979 2165 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-26-141\" not found" node="172-234-26-141" Nov 1 00:25:03.185078 kubelet[2165]: I1101 00:25:03.185019 2165 apiserver.go:52] "Watching apiserver" Nov 1 00:25:03.207553 kubelet[2165]: I1101 00:25:03.207506 2165 desired_state_of_world_populator.go:154] 
"Finished populating initial desired state of world" Nov 1 00:25:03.227717 kubelet[2165]: I1101 00:25:03.227645 2165 kubelet_node_status.go:78] "Successfully registered node" node="172-234-26-141" Nov 1 00:25:03.304401 kubelet[2165]: I1101 00:25:03.304358 2165 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:03.309166 kubelet[2165]: E1101 00:25:03.309115 2165 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-26-141\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:03.309166 kubelet[2165]: I1101 00:25:03.309148 2165 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:03.310438 kubelet[2165]: E1101 00:25:03.310353 2165 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-26-141\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:03.310438 kubelet[2165]: I1101 00:25:03.310373 2165 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-26-141" Nov 1 00:25:03.312048 kubelet[2165]: E1101 00:25:03.311999 2165 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-26-141\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-26-141" Nov 1 00:25:04.680361 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:25:05.228015 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Nov 1 00:25:05.228069 systemd[1]: Reloading... Nov 1 00:25:05.368124 zram_generator::config[2506]: No configuration found. Nov 1 00:25:05.485183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:25:05.574430 systemd[1]: Reloading finished in 345 ms. Nov 1 00:25:05.625562 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:05.633336 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:25:05.633677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:05.640349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:05.798368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:05.804995 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:25:05.853824 kubelet[2546]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:25:05.853824 kubelet[2546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:25:05.854254 kubelet[2546]: I1101 00:25:05.853877 2546 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:25:05.859831 kubelet[2546]: I1101 00:25:05.859798 2546 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:25:05.859831 kubelet[2546]: I1101 00:25:05.859822 2546 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:25:05.859924 kubelet[2546]: I1101 00:25:05.859854 2546 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:25:05.859924 kubelet[2546]: I1101 00:25:05.859865 2546 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:25:05.860054 kubelet[2546]: I1101 00:25:05.860005 2546 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:25:05.861114 kubelet[2546]: I1101 00:25:05.860998 2546 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:25:05.864054 kubelet[2546]: I1101 00:25:05.863737 2546 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:25:05.867137 kubelet[2546]: E1101 00:25:05.867108 2546 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:25:05.867212 kubelet[2546]: I1101 00:25:05.867156 2546 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:25:05.870899 kubelet[2546]: I1101 00:25:05.870828 2546 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:25:05.871272 kubelet[2546]: I1101 00:25:05.871219 2546 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:25:05.871391 kubelet[2546]: I1101 00:25:05.871253 2546 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-26-141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:25:05.871490 kubelet[2546]: I1101 00:25:05.871394 2546 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:25:05.871490 kubelet[2546]: I1101 00:25:05.871406 2546 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:25:05.871490 kubelet[2546]: I1101 00:25:05.871432 2546 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:25:05.872832 kubelet[2546]: I1101 00:25:05.872793 2546 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:25:05.873045 kubelet[2546]: I1101 00:25:05.873009 2546 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:25:05.874126 kubelet[2546]: I1101 00:25:05.874087 2546 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:25:05.874257 kubelet[2546]: I1101 00:25:05.874212 2546 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:25:05.875825 kubelet[2546]: I1101 00:25:05.874880 2546 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:25:05.882141 kubelet[2546]: I1101 00:25:05.882115 2546 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:25:05.882755 kubelet[2546]: I1101 00:25:05.882729 2546 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:25:05.882800 kubelet[2546]: I1101 00:25:05.882789 2546 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:25:05.885880 kubelet[2546]: I1101 
00:25:05.885802 2546 server.go:1262] "Started kubelet" Nov 1 00:25:05.886407 kubelet[2546]: I1101 00:25:05.886365 2546 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:25:05.887545 kubelet[2546]: I1101 00:25:05.887515 2546 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:25:05.888331 kubelet[2546]: I1101 00:25:05.888278 2546 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:25:05.888425 kubelet[2546]: I1101 00:25:05.888329 2546 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:25:05.888700 kubelet[2546]: I1101 00:25:05.888648 2546 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:25:05.890106 kubelet[2546]: I1101 00:25:05.889822 2546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:25:05.894357 kubelet[2546]: I1101 00:25:05.893166 2546 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:25:05.897142 kubelet[2546]: I1101 00:25:05.897126 2546 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:25:05.899125 kubelet[2546]: I1101 00:25:05.899100 2546 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:25:05.899545 kubelet[2546]: I1101 00:25:05.899518 2546 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:25:05.899657 kubelet[2546]: I1101 00:25:05.899623 2546 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:25:05.899830 kubelet[2546]: I1101 00:25:05.899816 2546 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:25:05.907003 kubelet[2546]: I1101 00:25:05.906978 2546 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:25:05.913966 kubelet[2546]: E1101 00:25:05.913937 2546 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:25:05.914784 kubelet[2546]: I1101 00:25:05.914219 2546 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:25:05.916446 kubelet[2546]: I1101 00:25:05.916430 2546 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:25:05.916586 kubelet[2546]: I1101 00:25:05.916542 2546 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:25:05.916696 kubelet[2546]: I1101 00:25:05.916683 2546 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:25:05.916844 kubelet[2546]: E1101 00:25:05.916827 2546 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:25:05.965077 kubelet[2546]: I1101 00:25:05.965020 2546 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:25:05.965311 kubelet[2546]: I1101 00:25:05.965250 2546 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:25:05.965381 kubelet[2546]: I1101 00:25:05.965372 2546 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:25:05.965573 kubelet[2546]: I1101 00:25:05.965552 2546 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:25:05.965684 kubelet[2546]: I1101 00:25:05.965652 2546 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:25:05.965750 kubelet[2546]: I1101 00:25:05.965741 2546 policy_none.go:49] "None policy: Start" Nov 1 00:25:05.965807 kubelet[2546]: I1101 00:25:05.965798 2546 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:25:05.965851 kubelet[2546]: I1101 00:25:05.965842 2546 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:25:05.966000 kubelet[2546]: I1101 00:25:05.965985 2546 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:25:05.966094 kubelet[2546]: I1101 00:25:05.966082 2546 policy_none.go:47] "Start" Nov 1 00:25:05.972216 kubelet[2546]: E1101 00:25:05.972200 2546 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:25:05.972688 kubelet[2546]: I1101 00:25:05.972630 2546 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:25:05.973343 kubelet[2546]: I1101 00:25:05.973314 2546 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:25:05.975005 kubelet[2546]: I1101 00:25:05.974173 2546 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:25:05.975988 kubelet[2546]: E1101 00:25:05.975952 2546 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:25:06.018171 kubelet[2546]: I1101 00:25:06.018153 2546 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:06.018383 kubelet[2546]: I1101 00:25:06.018357 2546 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:06.018477 kubelet[2546]: I1101 00:25:06.018237 2546 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-26-141" Nov 1 00:25:06.081664 kubelet[2546]: I1101 00:25:06.081646 2546 kubelet_node_status.go:75] "Attempting to register node" node="172-234-26-141" Nov 1 00:25:06.089897 kubelet[2546]: I1101 00:25:06.089842 2546 kubelet_node_status.go:124] "Node was previously registered" node="172-234-26-141" Nov 1 00:25:06.090012 kubelet[2546]: I1101 00:25:06.089926 2546 kubelet_node_status.go:78] "Successfully registered node" node="172-234-26-141" Nov 1 00:25:06.100766 kubelet[2546]: I1101 00:25:06.100730 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-flexvolume-dir\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:06.100846 kubelet[2546]: I1101 00:25:06.100780 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-k8s-certs\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:06.100846 kubelet[2546]: I1101 00:25:06.100819 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-kubeconfig\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:06.100846 kubelet[2546]: I1101 00:25:06.100840 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:06.100981 kubelet[2546]: I1101 00:25:06.100860 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68a433feb0a7c09d48de2ed47656bb0c-k8s-certs\") pod \"kube-apiserver-172-234-26-141\" (UID: \"68a433feb0a7c09d48de2ed47656bb0c\") " pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:06.100981 kubelet[2546]: I1101 00:25:06.100874 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fd7db43d57c4031f94130845af0aac3-ca-certs\") pod \"kube-controller-manager-172-234-26-141\" (UID: \"7fd7db43d57c4031f94130845af0aac3\") " pod="kube-system/kube-controller-manager-172-234-26-141" Nov 1 00:25:06.100981 kubelet[2546]: I1101 00:25:06.100893 2546 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da789d0f53b0d289258e622e55c4ce2f-kubeconfig\") pod \"kube-scheduler-172-234-26-141\" (UID: \"da789d0f53b0d289258e622e55c4ce2f\") " pod="kube-system/kube-scheduler-172-234-26-141" Nov 1 00:25:06.100981 kubelet[2546]: I1101 00:25:06.100906 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a433feb0a7c09d48de2ed47656bb0c-ca-certs\") pod \"kube-apiserver-172-234-26-141\" (UID: \"68a433feb0a7c09d48de2ed47656bb0c\") " pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:06.100981 kubelet[2546]: I1101 00:25:06.100937 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68a433feb0a7c09d48de2ed47656bb0c-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-26-141\" (UID: \"68a433feb0a7c09d48de2ed47656bb0c\") " pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:06.324667 kubelet[2546]: E1101 00:25:06.324409 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:06.324667 kubelet[2546]: E1101 00:25:06.324579 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:06.325439 kubelet[2546]: E1101 00:25:06.324970 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:06.875925 kubelet[2546]: I1101 00:25:06.875861 2546 apiserver.go:52] "Watching apiserver" Nov 1 00:25:06.899885 kubelet[2546]: I1101 00:25:06.899835 2546 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:25:06.946741 kubelet[2546]: I1101 00:25:06.946693 2546 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:06.947316 kubelet[2546]: E1101 00:25:06.947291 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:06.950446 kubelet[2546]: E1101 00:25:06.950367 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:06.963218 kubelet[2546]: E1101 00:25:06.963169 2546 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-26-141\" already exists" pod="kube-system/kube-apiserver-172-234-26-141" Nov 1 00:25:06.963382 kubelet[2546]: E1101 00:25:06.963284 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:06.970696 kubelet[2546]: I1101 00:25:06.970634 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-26-141" podStartSLOduration=0.970623208 podStartE2EDuration="970.623208ms" podCreationTimestamp="2025-11-01 00:25:06 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:06.968752745 +0000 UTC m=+1.159690860" watchObservedRunningTime="2025-11-01 00:25:06.970623208 +0000 UTC m=+1.161561303" Nov 1 00:25:06.990050 kubelet[2546]: I1101 00:25:06.988086 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-26-141" podStartSLOduration=0.988078274 podStartE2EDuration="988.078274ms" podCreationTimestamp="2025-11-01 00:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:06.980653053 +0000 UTC m=+1.171591168" watchObservedRunningTime="2025-11-01 00:25:06.988078274 +0000 UTC m=+1.179016369" Nov 1 00:25:06.999118 kubelet[2546]: I1101 00:25:06.999066 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-26-141" podStartSLOduration=0.999060451 podStartE2EDuration="999.060451ms" podCreationTimestamp="2025-11-01 00:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:06.988801755 +0000 UTC m=+1.179739840" watchObservedRunningTime="2025-11-01 00:25:06.999060451 +0000 UTC m=+1.189998536" Nov 1 00:25:07.948768 kubelet[2546]: E1101 00:25:07.947967 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:07.948768 kubelet[2546]: E1101 00:25:07.948484 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:11.356533 kubelet[2546]: I1101 00:25:11.356473 2546 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:25:11.357280 containerd[1464]: time="2025-11-01T00:25:11.357207249Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:25:11.358569 kubelet[2546]: I1101 00:25:11.357496 2546 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:25:12.231255 systemd[1]: Created slice kubepods-besteffort-podcffe1c56_2e11_4e00_b418_ad3c13106f32.slice - libcontainer container kubepods-besteffort-podcffe1c56_2e11_4e00_b418_ad3c13106f32.slice. 
Nov 1 00:25:12.241346 kubelet[2546]: I1101 00:25:12.241288 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkz9\" (UniqueName: \"kubernetes.io/projected/cffe1c56-2e11-4e00-b418-ad3c13106f32-kube-api-access-pjkz9\") pod \"kube-proxy-lzfg2\" (UID: \"cffe1c56-2e11-4e00-b418-ad3c13106f32\") " pod="kube-system/kube-proxy-lzfg2" Nov 1 00:25:12.241346 kubelet[2546]: I1101 00:25:12.241330 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cffe1c56-2e11-4e00-b418-ad3c13106f32-kube-proxy\") pod \"kube-proxy-lzfg2\" (UID: \"cffe1c56-2e11-4e00-b418-ad3c13106f32\") " pod="kube-system/kube-proxy-lzfg2" Nov 1 00:25:12.241346 kubelet[2546]: I1101 00:25:12.241351 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cffe1c56-2e11-4e00-b418-ad3c13106f32-xtables-lock\") pod \"kube-proxy-lzfg2\" (UID: \"cffe1c56-2e11-4e00-b418-ad3c13106f32\") " pod="kube-system/kube-proxy-lzfg2" Nov 1 00:25:12.241478 kubelet[2546]: I1101 00:25:12.241367 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cffe1c56-2e11-4e00-b418-ad3c13106f32-lib-modules\") pod \"kube-proxy-lzfg2\" (UID: \"cffe1c56-2e11-4e00-b418-ad3c13106f32\") " pod="kube-system/kube-proxy-lzfg2" Nov 1 00:25:12.253060 kubelet[2546]: E1101 00:25:12.252190 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:12.531500 systemd[1]: Created slice kubepods-besteffort-podd7b1d8f3_1a37_40e3_8317_efba7156d99c.slice - libcontainer container kubepods-besteffort-podd7b1d8f3_1a37_40e3_8317_efba7156d99c.slice. Nov 1 00:25:12.542081 kubelet[2546]: E1101 00:25:12.541305 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:12.543992 kubelet[2546]: I1101 00:25:12.543707 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5mws\" (UniqueName: \"kubernetes.io/projected/d7b1d8f3-1a37-40e3-8317-efba7156d99c-kube-api-access-g5mws\") pod \"tigera-operator-65cdcdfd6d-nb6q7\" (UID: \"d7b1d8f3-1a37-40e3-8317-efba7156d99c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nb6q7" Nov 1 00:25:12.544215 kubelet[2546]: I1101 00:25:12.543890 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7b1d8f3-1a37-40e3-8317-efba7156d99c-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-nb6q7\" (UID: \"d7b1d8f3-1a37-40e3-8317-efba7156d99c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nb6q7" Nov 1 00:25:12.544249 containerd[1464]: time="2025-11-01T00:25:12.544057458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzfg2,Uid:cffe1c56-2e11-4e00-b418-ad3c13106f32,Namespace:kube-system,Attempt:0,}" Nov 1 00:25:12.583312 containerd[1464]: time="2025-11-01T00:25:12.583185920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:12.583312 containerd[1464]: time="2025-11-01T00:25:12.583262455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:12.583312 containerd[1464]: time="2025-11-01T00:25:12.583277468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:12.584342 containerd[1464]: time="2025-11-01T00:25:12.584109729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:12.617242 systemd[1]: Started cri-containerd-4f675a1ddd6ca775c5d4e067e9a00919222de99694be50240d885cbca18c6a92.scope - libcontainer container 4f675a1ddd6ca775c5d4e067e9a00919222de99694be50240d885cbca18c6a92. Nov 1 00:25:12.646814 containerd[1464]: time="2025-11-01T00:25:12.646722231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzfg2,Uid:cffe1c56-2e11-4e00-b418-ad3c13106f32,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f675a1ddd6ca775c5d4e067e9a00919222de99694be50240d885cbca18c6a92\"" Nov 1 00:25:12.648591 kubelet[2546]: E1101 00:25:12.648550 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:12.655937 containerd[1464]: time="2025-11-01T00:25:12.655581713Z" level=info msg="CreateContainer within sandbox \"4f675a1ddd6ca775c5d4e067e9a00919222de99694be50240d885cbca18c6a92\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:25:12.671395 containerd[1464]: time="2025-11-01T00:25:12.671361226Z" level=info msg="CreateContainer within sandbox \"4f675a1ddd6ca775c5d4e067e9a00919222de99694be50240d885cbca18c6a92\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"af0e044607db443a63e90e643b8c20873ccb9fe47eb757166da48d9c45820e8b\"" Nov 1 00:25:12.672882 containerd[1464]: time="2025-11-01T00:25:12.672819303Z" level=info msg="StartContainer for \"af0e044607db443a63e90e643b8c20873ccb9fe47eb757166da48d9c45820e8b\"" Nov 1 00:25:12.707178 systemd[1]: Started cri-containerd-af0e044607db443a63e90e643b8c20873ccb9fe47eb757166da48d9c45820e8b.scope - libcontainer container af0e044607db443a63e90e643b8c20873ccb9fe47eb757166da48d9c45820e8b. Nov 1 00:25:12.739825 containerd[1464]: time="2025-11-01T00:25:12.739775276Z" level=info msg="StartContainer for \"af0e044607db443a63e90e643b8c20873ccb9fe47eb757166da48d9c45820e8b\" returns successfully" Nov 1 00:25:12.841608 containerd[1464]: time="2025-11-01T00:25:12.841571515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nb6q7,Uid:d7b1d8f3-1a37-40e3-8317-efba7156d99c,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:25:12.871746 containerd[1464]: time="2025-11-01T00:25:12.868006563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:12.871746 containerd[1464]: time="2025-11-01T00:25:12.870143480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:12.871746 containerd[1464]: time="2025-11-01T00:25:12.870162371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:12.871746 containerd[1464]: time="2025-11-01T00:25:12.870254588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:12.892224 systemd[1]: Started cri-containerd-65ee8bc384a5a5fabe44356be231fcc38aad274168fd06917e63b63d21aa6553.scope - libcontainer container 65ee8bc384a5a5fabe44356be231fcc38aad274168fd06917e63b63d21aa6553. Nov 1 00:25:12.948439 containerd[1464]: time="2025-11-01T00:25:12.948404505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nb6q7,Uid:d7b1d8f3-1a37-40e3-8317-efba7156d99c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"65ee8bc384a5a5fabe44356be231fcc38aad274168fd06917e63b63d21aa6553\"" Nov 1 00:25:12.952158 containerd[1464]: time="2025-11-01T00:25:12.952123979Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:25:12.962319 kubelet[2546]: E1101 00:25:12.962297 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:12.963225 kubelet[2546]: E1101 00:25:12.963048 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:12.987474 kubelet[2546]: I1101 00:25:12.987316 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzfg2" podStartSLOduration=0.98730053 podStartE2EDuration="987.30053ms" podCreationTimestamp="2025-11-01 00:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:12.987267897 +0000 UTC m=+7.178205982" watchObservedRunningTime="2025-11-01 00:25:12.98730053 +0000 UTC m=+7.178238615" Nov 1 00:25:13.857957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41455903.mount: Deactivated successfully. 
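[Editor's note] The pod_startup_latency_tracker entries above are timestamp arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and for pods that pulled no image (firstStartedPulling/lastFinishedPulling at the zero time) the SLO duration equals it. A quick check of the kube-proxy-lzfg2 figures — this is a sketch of the arithmetic, not the tracker's code, and it lands within a few tens of microseconds of the logged 987.30053ms because the tracker snapshots its own clock:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Timestamps copied from the kube-proxy-lzfg2 tracker entry.
	created, _ := time.Parse(layout, "2025-11-01 00:25:12 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-01 00:25:12.987267897 +0000 UTC")

	// No image pull happened, so SLO duration == end-to-end duration.
	e2e := running.Sub(created)
	fmt.Println(e2e) // 987.267897ms, vs. the logged 987.30053ms
}
```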
Nov 1 00:25:13.863478 kubelet[2546]: E1101 00:25:13.863452 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:13.964076 kubelet[2546]: E1101 00:25:13.963619 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:14.745837 containerd[1464]: time="2025-11-01T00:25:14.745767177Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:14.746982 containerd[1464]: time="2025-11-01T00:25:14.746729629Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:25:14.749052 containerd[1464]: time="2025-11-01T00:25:14.747594146Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:14.750740 containerd[1464]: time="2025-11-01T00:25:14.750704967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:14.752218 containerd[1464]: time="2025-11-01T00:25:14.752184643Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.799450558s" Nov 1 00:25:14.752259 containerd[1464]: time="2025-11-01T00:25:14.752221285Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:25:14.756226 containerd[1464]: time="2025-11-01T00:25:14.756190663Z" level=info msg="CreateContainer within sandbox \"65ee8bc384a5a5fabe44356be231fcc38aad274168fd06917e63b63d21aa6553\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:25:14.778535 containerd[1464]: time="2025-11-01T00:25:14.778473320Z" level=info msg="CreateContainer within sandbox \"65ee8bc384a5a5fabe44356be231fcc38aad274168fd06917e63b63d21aa6553\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"94800ea3df59eba643fd5995b9c93a3312c48295aaa73e64b1a687f99732bfa6\"" Nov 1 00:25:14.780723 containerd[1464]: time="2025-11-01T00:25:14.779931225Z" level=info msg="StartContainer for \"94800ea3df59eba643fd5995b9c93a3312c48295aaa73e64b1a687f99732bfa6\"" Nov 1 00:25:14.823239 systemd[1]: Started cri-containerd-94800ea3df59eba643fd5995b9c93a3312c48295aaa73e64b1a687f99732bfa6.scope - libcontainer container 94800ea3df59eba643fd5995b9c93a3312c48295aaa73e64b1a687f99732bfa6. 
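[Editor's note] The ImageCreate events followed by "Pulled image ... in 1.799450558s" show containerd resolving the tag, fetching the layers, and recording both the repo tag and the repo digest. The same pull can be reproduced against the node's containerd socket with the Go client; a sketch, assuming access to /run/containerd/containerd.sock and the k8s.io namespace that the CRI plugin uses:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same operator image seen in the log.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Repo tag and repo digest, matching the two identifiers logged above.
	fmt.Println(img.Name(), img.Target().Digest)
}
```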
Nov 1 00:25:14.858547 containerd[1464]: time="2025-11-01T00:25:14.858300744Z" level=info msg="StartContainer for \"94800ea3df59eba643fd5995b9c93a3312c48295aaa73e64b1a687f99732bfa6\" returns successfully" Nov 1 00:25:15.290995 kubelet[2546]: E1101 00:25:15.289707 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:15.309957 kubelet[2546]: I1101 00:25:15.309896 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-nb6q7" podStartSLOduration=1.507657643 podStartE2EDuration="3.309879527s" podCreationTimestamp="2025-11-01 00:25:12 +0000 UTC" firstStartedPulling="2025-11-01 00:25:12.951073691 +0000 UTC m=+7.142011776" lastFinishedPulling="2025-11-01 00:25:14.753295565 +0000 UTC m=+8.944233660" observedRunningTime="2025-11-01 00:25:14.976963979 +0000 UTC m=+9.167902094" watchObservedRunningTime="2025-11-01 00:25:15.309879527 +0000 UTC m=+9.500817642" Nov 1 00:25:15.969824 kubelet[2546]: E1101 00:25:15.969777 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:19.091090 update_engine[1452]: I20251101 00:25:19.090138 1452 update_attempter.cc:509] Updating boot flags... Nov 1 00:25:19.177071 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2924) Nov 1 00:25:19.311096 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2927) Nov 1 00:25:19.407321 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2927) Nov 1 00:25:20.555220 sudo[1686]: pam_unix(sudo:session): session closed for user root Nov 1 00:25:20.608235 sshd[1683]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:20.614704 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:25:20.615757 systemd[1]: sshd@6-172.234.26.141:22-139.178.68.195:56722.service: Deactivated successfully. Nov 1 00:25:20.621340 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:25:20.622392 systemd[1]: session-7.scope: Consumed 4.836s CPU time, 157.9M memory peak, 0B memory swap peak. Nov 1 00:25:20.627000 systemd-logind[1448]: Removed session 7. Nov 1 00:25:26.033632 systemd[1]: Created slice kubepods-besteffort-pod7883e5a7_311e_458b_b33b_a0d12c3ebf2f.slice - libcontainer container kubepods-besteffort-pod7883e5a7_311e_458b_b33b_a0d12c3ebf2f.slice. 
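[Editor's note] Each "Created slice kubepods-besteffort-pod<uid>.slice" line is the kubelet's systemd cgroup driver (cgroupDriver=systemd, CgroupVersion 2 in the NodeConfig earlier) materializing a per-pod cgroup: the QoS class is baked into the slice name and the pod UID's dashes become underscores, since "-" is systemd's slice-hierarchy separator. Guaranteed pods omit the QoS segment. A sketch of that name mangling — the helper name is illustrative, and besteffort is taken from the slice just created for calico-typha:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the systemd slice name used for a pod:
// kubepods-<qos>-pod<uid>.slice, with dashes in the UID escaped to
// underscores so they are not parsed as slice separators.
func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// UID of the calico-typha pod from the log above.
	fmt.Println(podSliceName("besteffort", "7883e5a7-311e-458b-b33b-a0d12c3ebf2f"))
	// kubepods-besteffort-pod7883e5a7_311e_458b_b33b_a0d12c3ebf2f.slice
}
```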
Nov 1 00:25:26.135784 kubelet[2546]: I1101 00:25:26.135683 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tzgk\" (UniqueName: \"kubernetes.io/projected/7883e5a7-311e-458b-b33b-a0d12c3ebf2f-kube-api-access-7tzgk\") pod \"calico-typha-fb7b8b794-mjfhg\" (UID: \"7883e5a7-311e-458b-b33b-a0d12c3ebf2f\") " pod="calico-system/calico-typha-fb7b8b794-mjfhg" Nov 1 00:25:26.135784 kubelet[2546]: I1101 00:25:26.135732 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7883e5a7-311e-458b-b33b-a0d12c3ebf2f-tigera-ca-bundle\") pod \"calico-typha-fb7b8b794-mjfhg\" (UID: \"7883e5a7-311e-458b-b33b-a0d12c3ebf2f\") " pod="calico-system/calico-typha-fb7b8b794-mjfhg" Nov 1 00:25:26.135784 kubelet[2546]: I1101 00:25:26.135759 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7883e5a7-311e-458b-b33b-a0d12c3ebf2f-typha-certs\") pod \"calico-typha-fb7b8b794-mjfhg\" (UID: \"7883e5a7-311e-458b-b33b-a0d12c3ebf2f\") " pod="calico-system/calico-typha-fb7b8b794-mjfhg" Nov 1 00:25:26.225346 systemd[1]: Created slice kubepods-besteffort-pod46b67824_fcbc_470a_b52b_a6d749a015d7.slice - libcontainer container kubepods-besteffort-pod46b67824_fcbc_470a_b52b_a6d749a015d7.slice. Nov 1 00:25:26.236896 kubelet[2546]: I1101 00:25:26.236805 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-lib-modules\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.236990 kubelet[2546]: I1101 00:25:26.236916 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-cni-log-dir\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.237111 kubelet[2546]: I1101 00:25:26.237068 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-cni-bin-dir\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.237233 kubelet[2546]: I1101 00:25:26.237196 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-var-run-calico\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.237317 kubelet[2546]: I1101 00:25:26.237285 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-xtables-lock\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.237422 kubelet[2546]: I1101 00:25:26.237381 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-cni-net-dir\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.237565 kubelet[2546]: I1101 00:25:26.237487 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-policysync\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.237657 kubelet[2546]: I1101 00:25:26.237629 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kpc\" (UniqueName: \"kubernetes.io/projected/46b67824-fcbc-470a-b52b-a6d749a015d7-kube-api-access-g5kpc\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.237761 kubelet[2546]: I1101 00:25:26.237728 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/46b67824-fcbc-470a-b52b-a6d749a015d7-node-certs\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.238109 kubelet[2546]: I1101 00:25:26.238084 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46b67824-fcbc-470a-b52b-a6d749a015d7-tigera-ca-bundle\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.238180 kubelet[2546]: I1101 00:25:26.238115 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-var-lib-calico\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.238180 kubelet[2546]: I1101 00:25:26.238155 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/46b67824-fcbc-470a-b52b-a6d749a015d7-flexvol-driver-host\") pod \"calico-node-tp95l\" (UID: \"46b67824-fcbc-470a-b52b-a6d749a015d7\") " pod="calico-system/calico-node-tp95l" Nov 1 00:25:26.341479 kubelet[2546]: E1101 00:25:26.341353 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.341479 kubelet[2546]: W1101 00:25:26.341373 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.341479 kubelet[2546]: E1101 00:25:26.341397 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the driver-call.go:262 / driver-call.go:149 / plugins.go:697 FlexVolume probe-error triplet above recurs 4 more times between 00:25:26.341 and 00:25:26.346, identical apart from timestamps] Nov 1 00:25:26.346954 kubelet[2546]: E1101 00:25:26.346858 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.346954 kubelet[2546]: W1101 00:25:26.346870 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:24:26.346954 kubelet[2546]: E1101 00:25:26.346880 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:25:26.347878 kubelet[2546]: E1101 00:25:26.347461 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.347878 kubelet[2546]: W1101 00:25:26.347475 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.347878 kubelet[2546]: E1101 00:25:26.347484 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.348406 kubelet[2546]: E1101 00:25:26.348370 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:26.349263 containerd[1464]: time="2025-11-01T00:25:26.348821296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb7b8b794-mjfhg,Uid:7883e5a7-311e-458b-b33b-a0d12c3ebf2f,Namespace:calico-system,Attempt:0,}" Nov 1 00:25:26.349759 kubelet[2546]: E1101 00:25:26.349730 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.349824 kubelet[2546]: W1101 00:25:26.349784 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.349824 kubelet[2546]: E1101 00:25:26.349798 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.350691 kubelet[2546]: E1101 00:25:26.350458 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.350691 kubelet[2546]: W1101 00:25:26.350474 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.350691 kubelet[2546]: E1101 00:25:26.350487 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.371545 kubelet[2546]: E1101 00:25:26.371527 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.371839 kubelet[2546]: W1101 00:25:26.371699 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.371839 kubelet[2546]: E1101 00:25:26.371722 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.391225 containerd[1464]: time="2025-11-01T00:25:26.390995173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:26.391225 containerd[1464]: time="2025-11-01T00:25:26.391164628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:26.391709 containerd[1464]: time="2025-11-01T00:25:26.391341313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:26.394126 containerd[1464]: time="2025-11-01T00:25:26.394086037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:26.427761 systemd[1]: Started cri-containerd-388a8b5304604cd7def1632fcd9856cde4b95a9c2649d398f472b89f22926950.scope - libcontainer container 388a8b5304604cd7def1632fcd9856cde4b95a9c2649d398f472b89f22926950. Nov 1 00:25:26.466784 kubelet[2546]: E1101 00:25:26.466642 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:26.497776 containerd[1464]: time="2025-11-01T00:25:26.497637391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb7b8b794-mjfhg,Uid:7883e5a7-311e-458b-b33b-a0d12c3ebf2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"388a8b5304604cd7def1632fcd9856cde4b95a9c2649d398f472b89f22926950\"" Nov 1 00:25:26.514066 kubelet[2546]: E1101 00:25:26.512667 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:26.527232 containerd[1464]: time="2025-11-01T00:25:26.527197619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:25:26.533104 kubelet[2546]: E1101 00:25:26.532692 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:26.533449 containerd[1464]: time="2025-11-01T00:25:26.533348818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tp95l,Uid:46b67824-fcbc-470a-b52b-a6d749a015d7,Namespace:calico-system,Attempt:0,}" Nov 1 00:25:26.542068 kubelet[2546]: E1101 00:25:26.541456 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.542068 kubelet[2546]: W1101 00:25:26.541471 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.542068 kubelet[2546]: E1101 00:25:26.541487 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:25:26.542068 kubelet[2546]: E1101 00:25:26.541765 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.542068 kubelet[2546]: W1101 00:25:26.541775 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.542068 kubelet[2546]: E1101 00:25:26.541785 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.543904 kubelet[2546]: E1101 00:25:26.543702 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.543904 kubelet[2546]: W1101 00:25:26.543723 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.543904 kubelet[2546]: E1101 00:25:26.543738 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.544085 kubelet[2546]: E1101 00:25:26.544069 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.544085 kubelet[2546]: W1101 00:25:26.544079 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.544138 kubelet[2546]: E1101 00:25:26.544089 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.545150 kubelet[2546]: E1101 00:25:26.544519 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.545150 kubelet[2546]: W1101 00:25:26.544539 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.545150 kubelet[2546]: E1101 00:25:26.544552 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:25:26.545150 kubelet[2546]: I1101 00:25:26.544595 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12cca151-8712-4604-9035-7f2e07caab0c-kubelet-dir\") pod \"csi-node-driver-8hdgb\" (UID: \"12cca151-8712-4604-9035-7f2e07caab0c\") " pod="calico-system/csi-node-driver-8hdgb" Nov 1 00:25:26.545150 kubelet[2546]: E1101 00:25:26.544990 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.545150 kubelet[2546]: W1101 00:25:26.545001 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.545150 kubelet[2546]: E1101 00:25:26.545011 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.548055 kubelet[2546]: E1101 00:25:26.545718 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.548055 kubelet[2546]: W1101 00:25:26.545729 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.548055 kubelet[2546]: E1101 00:25:26.545738 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.548055 kubelet[2546]: E1101 00:25:26.545984 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.548055 kubelet[2546]: W1101 00:25:26.545992 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.548055 kubelet[2546]: E1101 00:25:26.546000 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.548277 kubelet[2546]: E1101 00:25:26.548255 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.548277 kubelet[2546]: W1101 00:25:26.548274 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.548368 kubelet[2546]: E1101 00:25:26.548287 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the three-message FlexVolume probe error repeats 9 more times between 00:25:26.548 and 00:25:26.553, identical apart from timestamps] Nov 1 00:25:26.553829 kubelet[2546]: E1101 00:25:26.553777 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.553829 kubelet[2546]: W1101 00:25:26.553799 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.553829 kubelet[2546]: E1101 00:25:26.553816 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:25:26.555284 kubelet[2546]: E1101 00:25:26.554937 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.555284 kubelet[2546]: W1101 00:25:26.555278 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.555360 kubelet[2546]: E1101 00:25:26.555293 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.556237 kubelet[2546]: E1101 00:25:26.556199 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.556237 kubelet[2546]: W1101 00:25:26.556218 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.556237 kubelet[2546]: E1101 00:25:26.556231 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.556679 kubelet[2546]: E1101 00:25:26.556614 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.556679 kubelet[2546]: W1101 00:25:26.556673 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.556679 kubelet[2546]: E1101 00:25:26.556685 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.558004 kubelet[2546]: E1101 00:25:26.557909 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:26.558004 kubelet[2546]: W1101 00:25:26.557925 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:26.558004 kubelet[2546]: E1101 00:25:26.557935 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:26.591993 containerd[1464]: time="2025-11-01T00:25:26.591703142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:26.594540 containerd[1464]: time="2025-11-01T00:25:26.592539748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:26.594540 containerd[1464]: time="2025-11-01T00:25:26.592611180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:26.594540 containerd[1464]: time="2025-11-01T00:25:26.594125446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:26.627192 systemd[1]: Started cri-containerd-0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13.scope - libcontainer container 0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13. [... the FlexVolume probe triplet keeps repeating from 00:25:26.645 through 00:25:26.771; the distinct events interleaved with it follow ...] Nov 1 00:25:26.646428 kubelet[2546]: I1101 00:25:26.646129 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12cca151-8712-4604-9035-7f2e07caab0c-registration-dir\") pod \"csi-node-driver-8hdgb\" (UID: \"12cca151-8712-4604-9035-7f2e07caab0c\") " pod="calico-system/csi-node-driver-8hdgb" Nov 1 00:25:26.647762 kubelet[2546]: I1101 00:25:26.647593 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz59m\" (UniqueName: \"kubernetes.io/projected/12cca151-8712-4604-9035-7f2e07caab0c-kube-api-access-wz59m\") pod \"csi-node-driver-8hdgb\" (UID: \"12cca151-8712-4604-9035-7f2e07caab0c\") " pod="calico-system/csi-node-driver-8hdgb" Nov 1 00:25:26.649802 kubelet[2546]: I1101 00:25:26.649768 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12cca151-8712-4604-9035-7f2e07caab0c-socket-dir\") pod \"csi-node-driver-8hdgb\" (UID: \"12cca151-8712-4604-9035-7f2e07caab0c\") " pod="calico-system/csi-node-driver-8hdgb" Nov 1 00:25:26.651312 kubelet[2546]: I1101 00:25:26.651112 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/12cca151-8712-4604-9035-7f2e07caab0c-varrun\") pod \"csi-node-driver-8hdgb\" (UID: \"12cca151-8712-4604-9035-7f2e07caab0c\") " pod="calico-system/csi-node-driver-8hdgb" Nov 1 00:25:26.687705 containerd[1464]: time="2025-11-01T00:25:26.687567018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tp95l,Uid:46b67824-fcbc-470a-b52b-a6d749a015d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13\"" Nov 1 00:25:26.689157 kubelet[2546]: E1101 00:25:26.688918 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:27.321193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700846607.mount: Deactivated successfully. 
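The repeated kubelet triplet above is a single failure reported from three call sites: the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not installed yet, the `init` call therefore captures no output, and decoding that empty output is what surfaces as "unexpected end of JSON input". A minimal Go sketch reproducing both error strings (illustrative only, not kubelet's driver-call code; assumes no binary named "uds" is on $PATH):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the JSON object kubelet expects a
// FlexVolume driver to print on stdout after each call.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Resolving a driver binary that does not exist yields
	// exec.ErrNotFound, whose text is exactly the W1101 warning in
	// the log: "executable file not found in $PATH".
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("lookup:", err)
	}

	// Since the driver never ran, the captured output is empty, and
	// decoding "" fails with "unexpected end of JSON input" -- the
	// E1101 driver-call.go line in the log.
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("unmarshal:", err)
	}
}
```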
Nov 1 00:25:27.918922 kubelet[2546]: E1101 00:25:27.918865 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:27.971688 containerd[1464]: time="2025-11-01T00:25:27.971600364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:27.972639 containerd[1464]: time="2025-11-01T00:25:27.972590903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:25:27.974559 containerd[1464]: time="2025-11-01T00:25:27.973262573Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:27.976065 containerd[1464]: time="2025-11-01T00:25:27.975178478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:27.976065 containerd[1464]: time="2025-11-01T00:25:27.975955090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.44872112s" Nov 1 00:25:27.976065 containerd[1464]: time="2025-11-01T00:25:27.975976881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:25:27.979339 containerd[1464]: time="2025-11-01T00:25:27.979313767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:25:27.997099 containerd[1464]: time="2025-11-01T00:25:27.997069650Z" level=info msg="CreateContainer within sandbox \"388a8b5304604cd7def1632fcd9856cde4b95a9c2649d398f472b89f22926950\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:25:28.007455 containerd[1464]: time="2025-11-01T00:25:28.007419169Z" level=info msg="CreateContainer within sandbox \"388a8b5304604cd7def1632fcd9856cde4b95a9c2649d398f472b89f22926950\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3a2da84a47fe98a4c5f31a2cc6be6b3caf2188ecd65f0d160d663fbb213c1431\"" Nov 1 00:25:28.010063 containerd[1464]: time="2025-11-01T00:25:28.007917903Z" level=info msg="StartContainer for \"3a2da84a47fe98a4c5f31a2cc6be6b3caf2188ecd65f0d160d663fbb213c1431\"" Nov 1 00:25:28.046228 systemd[1]: Started cri-containerd-3a2da84a47fe98a4c5f31a2cc6be6b3caf2188ecd65f0d160d663fbb213c1431.scope - libcontainer container 3a2da84a47fe98a4c5f31a2cc6be6b3caf2188ecd65f0d160d663fbb213c1431. 
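The typha pull above reports the image id, repo digest, size, and wall-clock duration ("in 1.44872112s"). Roughly the same operation can be driven directly against containerd with its Go client; a sketch under stated assumptions (containerd's default socket path, and the "k8s.io" namespace that CRI images live in):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are kept in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, then report roughly what the log line reports:
	// name, digest, and elapsed pull time.
	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}
```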
Nov 1 00:25:28.109643 containerd[1464]: time="2025-11-01T00:25:28.109600888Z" level=info msg="StartContainer for \"3a2da84a47fe98a4c5f31a2cc6be6b3caf2188ecd65f0d160d663fbb213c1431\" returns successfully" Nov 1 00:25:28.911067 containerd[1464]: time="2025-11-01T00:25:28.910901732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:28.912575 containerd[1464]: time="2025-11-01T00:25:28.912534157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:25:28.913829 containerd[1464]: time="2025-11-01T00:25:28.913799680Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:28.916829 containerd[1464]: time="2025-11-01T00:25:28.916697640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:28.917778 containerd[1464]: time="2025-11-01T00:25:28.917206344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 937.146194ms" Nov 1 00:25:28.917778 containerd[1464]: time="2025-11-01T00:25:28.917236194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:25:28.924193 containerd[1464]: time="2025-11-01T00:25:28.923773971Z" level=info msg="CreateContainer within sandbox \"0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:25:28.959957 containerd[1464]: time="2025-11-01T00:25:28.959845333Z" level=info msg="CreateContainer within sandbox \"0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41\"" Nov 1 00:25:28.963069 containerd[1464]: time="2025-11-01T00:25:28.961958531Z" level=info msg="StartContainer for \"2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41\"" Nov 1 00:25:29.015815 kubelet[2546]: E1101 00:25:29.015776 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:29.038170 systemd[1]: Started cri-containerd-2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41.scope - libcontainer container 2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41. 
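The flexvol-driver container created above (from the pod2daemon-flexvol image) is what installs Calico's FlexVolume binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, after which the probe errors stop. Per the FlexVolume contract, a driver answers each invocation with one JSON status object on stdout; a stub sketch of the `init` handshake (a minimal illustration of the contract, not Calico's actual driver):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// initResponse is the JSON a FlexVolume driver prints for "init";
// "attach": false tells kubelet to skip the attach/detach path,
// which fits a socket-only driver like nodeagent~uds.
type initResponse struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, `{"status":"Failure","message":"no command"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		out, _ := json.Marshal(initResponse{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Calls the driver does not implement report "Not supported".
		fmt.Println(`{"status":"Not supported"}`)
	}
}
```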
Nov 1 00:25:29.042425 kubelet[2546]: I1101 00:25:29.042362 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fb7b8b794-mjfhg" podStartSLOduration=1.587596097 podStartE2EDuration="3.04234853s" podCreationTimestamp="2025-11-01 00:25:26 +0000 UTC" firstStartedPulling="2025-11-01 00:25:26.52300133 +0000 UTC m=+20.713939415" lastFinishedPulling="2025-11-01 00:25:27.977753763 +0000 UTC m=+22.168691848" observedRunningTime="2025-11-01 00:25:29.042186797 +0000 UTC m=+23.233124902" watchObservedRunningTime="2025-11-01 00:25:29.04234853 +0000 UTC m=+23.233286645" Nov 1 00:25:29.074550 kubelet[2546]: E1101 00:25:29.074398 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.074550 kubelet[2546]: W1101 00:25:29.074424 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.074550 kubelet[2546]: E1101 00:25:29.074444 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.077353 kubelet[2546]: E1101 00:25:29.077136 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.077353 kubelet[2546]: W1101 00:25:29.077153 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.077353 kubelet[2546]: E1101 00:25:29.077168 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.077726 kubelet[2546]: E1101 00:25:29.077620 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.077726 kubelet[2546]: W1101 00:25:29.077663 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.077726 kubelet[2546]: E1101 00:25:29.077674 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.078346 kubelet[2546]: E1101 00:25:29.078129 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.078346 kubelet[2546]: W1101 00:25:29.078140 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.078346 kubelet[2546]: E1101 00:25:29.078150 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:25:29.079149 kubelet[2546]: E1101 00:25:29.079102 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.079149 kubelet[2546]: W1101 00:25:29.079122 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.079149 kubelet[2546]: E1101 00:25:29.079137 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.079990 kubelet[2546]: E1101 00:25:29.079973 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.079990 kubelet[2546]: W1101 00:25:29.079984 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.080079 kubelet[2546]: E1101 00:25:29.079995 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.080335 kubelet[2546]: E1101 00:25:29.080298 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.080335 kubelet[2546]: W1101 00:25:29.080320 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.080335 kubelet[2546]: E1101 00:25:29.080335 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.081921 kubelet[2546]: E1101 00:25:29.081883 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.081921 kubelet[2546]: W1101 00:25:29.081901 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.081921 kubelet[2546]: E1101 00:25:29.081914 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.083153 kubelet[2546]: E1101 00:25:29.083110 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.083153 kubelet[2546]: W1101 00:25:29.083145 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.083224 kubelet[2546]: E1101 00:25:29.083157 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:25:29.083774 kubelet[2546]: E1101 00:25:29.083504 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.083774 kubelet[2546]: W1101 00:25:29.083514 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.083774 kubelet[2546]: E1101 00:25:29.083523 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.083774 kubelet[2546]: E1101 00:25:29.083764 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.083774 kubelet[2546]: W1101 00:25:29.083774 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.083890 kubelet[2546]: E1101 00:25:29.083782 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.084013 kubelet[2546]: E1101 00:25:29.083989 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.084013 kubelet[2546]: W1101 00:25:29.084001 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.084013 kubelet[2546]: E1101 00:25:29.084010 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.084501 kubelet[2546]: E1101 00:25:29.084279 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.084501 kubelet[2546]: W1101 00:25:29.084287 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.084501 kubelet[2546]: E1101 00:25:29.084296 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.084762 kubelet[2546]: E1101 00:25:29.084720 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.084762 kubelet[2546]: W1101 00:25:29.084740 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.084762 kubelet[2546]: E1101 00:25:29.084752 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:25:29.086426 kubelet[2546]: E1101 00:25:29.086243 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.086426 kubelet[2546]: W1101 00:25:29.086255 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.086426 kubelet[2546]: E1101 00:25:29.086264 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.086829 kubelet[2546]: E1101 00:25:29.086704 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.086829 kubelet[2546]: W1101 00:25:29.086715 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.086829 kubelet[2546]: E1101 00:25:29.086725 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.087663 kubelet[2546]: E1101 00:25:29.087093 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.087663 kubelet[2546]: W1101 00:25:29.087102 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.087663 kubelet[2546]: E1101 00:25:29.087111 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.087663 kubelet[2546]: E1101 00:25:29.087389 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.087663 kubelet[2546]: W1101 00:25:29.087399 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.087663 kubelet[2546]: E1101 00:25:29.087407 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:25:29.088066 kubelet[2546]: E1101 00:25:29.088005 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:25:29.088066 kubelet[2546]: W1101 00:25:29.088047 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:25:29.088066 kubelet[2546]: E1101 00:25:29.088062 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 1 00:25:29.090143 kubelet[2546]: E1101 00:25:29.090126 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:25:29.090143 kubelet[2546]: W1101 00:25:29.090138 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:25:29.090268 kubelet[2546]: E1101 00:25:29.090149 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:697 triple repeats 13 more times, 00:25:29.090423 through 00:25:29.100185 ...]
Nov 1 00:25:29.193392 containerd[1464]: time="2025-11-01T00:25:29.193230462Z" level=info msg="StartContainer for \"2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41\" returns successfully"
Nov 1 00:25:29.224497 systemd[1]: cri-containerd-2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41.scope: Deactivated successfully.
Nov 1 00:25:29.265953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41-rootfs.mount: Deactivated successfully.
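The driver-call failures above are kubelet's FlexVolume prober: it execs the plugin binary with `init` and parses whatever lands on stdout as JSON. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, the captured output is empty, and unmarshalling an empty byte slice is exactly what produces Go's "unexpected end of JSON input". A minimal sketch of that failure mode (the DriverStatus shape is an assumption for illustration, not kubelet's actual struct):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus loosely mirrors the JSON a FlexVolume driver is expected
// to print for "init" (assumed shape, for illustration only).
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	output := "" // the uds binary was not found, so captured stdout is empty

	var st DriverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```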
Nov 1 00:25:29.322538 containerd[1464]: time="2025-11-01T00:25:29.322334586Z" level=info msg="shim disconnected" id=2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41 namespace=k8s.io Nov 1 00:25:29.322538 containerd[1464]: time="2025-11-01T00:25:29.322382767Z" level=warning msg="cleaning up after shim disconnected" id=2f4b37cb50e87bce8fb22dfa65575ab5a1d29f77335ce1a318ffdba62fb03a41 namespace=k8s.io Nov 1 00:25:29.322538 containerd[1464]: time="2025-11-01T00:25:29.322395667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:29.342652 containerd[1464]: time="2025-11-01T00:25:29.341971579Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:25:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 1 00:25:29.918999 kubelet[2546]: E1101 00:25:29.917714 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:30.021110 kubelet[2546]: E1101 00:25:30.019912 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:30.021990 kubelet[2546]: E1101 00:25:30.020519 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:30.022661 containerd[1464]: time="2025-11-01T00:25:30.022347028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:25:31.021846 kubelet[2546]: E1101 00:25:31.021810 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:31.918345 kubelet[2546]: E1101 00:25:31.918308 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:32.293900 containerd[1464]: time="2025-11-01T00:25:32.293566792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:32.294418 containerd[1464]: time="2025-11-01T00:25:32.294353099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:25:32.296496 containerd[1464]: time="2025-11-01T00:25:32.295378280Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:32.297710 containerd[1464]: time="2025-11-01T00:25:32.297673809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:32.298980 containerd[1464]: time="2025-11-01T00:25:32.298949267Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.276568918s" Nov 1 00:25:32.299121 containerd[1464]: time="2025-11-01T00:25:32.299092120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:25:32.302708 containerd[1464]: time="2025-11-01T00:25:32.302672876Z" level=info msg="CreateContainer within sandbox \"0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:25:32.316915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752177770.mount: Deactivated successfully. Nov 1 00:25:32.319413 containerd[1464]: time="2025-11-01T00:25:32.319334822Z" level=info msg="CreateContainer within sandbox \"0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58\"" Nov 1 00:25:32.320202 containerd[1464]: time="2025-11-01T00:25:32.320174800Z" level=info msg="StartContainer for \"bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58\"" Nov 1 00:25:32.363166 systemd[1]: Started cri-containerd-bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58.scope - libcontainer container bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58. Nov 1 00:25:32.397974 containerd[1464]: time="2025-11-01T00:25:32.397865348Z" level=info msg="StartContainer for \"bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58\" returns successfully" Nov 1 00:25:32.917394 containerd[1464]: time="2025-11-01T00:25:32.917331529Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:25:32.920796 systemd[1]: cri-containerd-bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58.scope: Deactivated successfully. 
Nov 1 00:25:32.952395 kubelet[2546]: I1101 00:25:32.952145 2546 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:25:33.001695 containerd[1464]: time="2025-11-01T00:25:33.001567058Z" level=info msg="shim disconnected" id=bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58 namespace=k8s.io Nov 1 00:25:33.001695 containerd[1464]: time="2025-11-01T00:25:33.001629639Z" level=warning msg="cleaning up after shim disconnected" id=bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58 namespace=k8s.io Nov 1 00:25:33.001695 containerd[1464]: time="2025-11-01T00:25:33.001640949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:33.018403 kubelet[2546]: I1101 00:25:33.017848 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frgd2\" (UniqueName: \"kubernetes.io/projected/c30f1fcc-7cd9-400f-884f-bd1e3091973a-kube-api-access-frgd2\") pod \"calico-kube-controllers-f5879cb96-dlw8c\" (UID: \"c30f1fcc-7cd9-400f-884f-bd1e3091973a\") " pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" Nov 1 00:25:33.018403 kubelet[2546]: I1101 00:25:33.017883 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c30f1fcc-7cd9-400f-884f-bd1e3091973a-tigera-ca-bundle\") pod \"calico-kube-controllers-f5879cb96-dlw8c\" (UID: \"c30f1fcc-7cd9-400f-884f-bd1e3091973a\") " pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" Nov 1 00:25:33.027421 systemd[1]: Created slice kubepods-besteffort-podc30f1fcc_7cd9_400f_884f_bd1e3091973a.slice - libcontainer container kubepods-besteffort-podc30f1fcc_7cd9_400f_884f_bd1e3091973a.slice. Nov 1 00:25:33.043709 systemd[1]: Created slice kubepods-burstable-pod5b66f29d_a0c7_459a_a622_8bd163fa7e38.slice - libcontainer container kubepods-burstable-pod5b66f29d_a0c7_459a_a622_8bd163fa7e38.slice. Nov 1 00:25:33.050415 kubelet[2546]: E1101 00:25:33.049572 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:33.051640 containerd[1464]: time="2025-11-01T00:25:33.050753226Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:25:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 1 00:25:33.061997 systemd[1]: Created slice kubepods-burstable-podb4745334_2dc8_452d_b994_9002bb77af9f.slice - libcontainer container kubepods-burstable-podb4745334_2dc8_452d_b994_9002bb77af9f.slice. Nov 1 00:25:33.073391 systemd[1]: Created slice kubepods-besteffort-pod7decf862_2dea_422d_a655_b341baeeaa59.slice - libcontainer container kubepods-besteffort-pod7decf862_2dea_422d_a655_b341baeeaa59.slice. Nov 1 00:25:33.087612 systemd[1]: Created slice kubepods-besteffort-pod6f8b1313_5d3a_421c_a1c3_861bc7b1da27.slice - libcontainer container kubepods-besteffort-pod6f8b1313_5d3a_421c_a1c3_861bc7b1da27.slice. Nov 1 00:25:33.103220 systemd[1]: Created slice kubepods-besteffort-pod1449e27d_cfd3_4b57_8ca8_d99ff2c00988.slice - libcontainer container kubepods-besteffort-pod1449e27d_cfd3_4b57_8ca8_d99ff2c00988.slice. 
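The recurring dns.go:154 "Nameserver limits exceeded" records are kubelet enforcing the classic resolv.conf ceiling of three nameservers when it builds pod DNS config; whatever survives the cut is printed as the "applied nameserver line". A small sketch of that cap (constant and function names are illustrative, not kubelet's; the fourth address is a made-up example of a dropped entry):

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the resolv.conf-style limit kubelet enforces

// capNameservers keeps the first maxNameservers entries and reports
// whether any were dropped (illustrative helper, not kubelet code).
func capNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	ns := []string{"172.232.0.16", "172.232.0.21", "172.232.0.13", "10.0.0.2"}
	applied, dropped := capNameservers(ns)
	if dropped {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```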
Nov 1 00:25:33.117440 systemd[1]: Created slice kubepods-besteffort-podee70047e_c633_44d7_81f9_abf395209951.slice - libcontainer container kubepods-besteffort-podee70047e_c633_44d7_81f9_abf395209951.slice. Nov 1 00:25:33.133238 systemd[1]: Created slice kubepods-besteffort-poda394b5b4_84f6_43c3_bf21_09838f083553.slice - libcontainer container kubepods-besteffort-poda394b5b4_84f6_43c3_bf21_09838f083553.slice. Nov 1 00:25:33.220645 kubelet[2546]: I1101 00:25:33.220496 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee70047e-c633-44d7-81f9-abf395209951-whisker-backend-key-pair\") pod \"whisker-78997778df-lcfj5\" (UID: \"ee70047e-c633-44d7-81f9-abf395209951\") " pod="calico-system/whisker-78997778df-lcfj5" Nov 1 00:25:33.220645 kubelet[2546]: I1101 00:25:33.220531 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee70047e-c633-44d7-81f9-abf395209951-whisker-ca-bundle\") pod \"whisker-78997778df-lcfj5\" (UID: \"ee70047e-c633-44d7-81f9-abf395209951\") " pod="calico-system/whisker-78997778df-lcfj5" Nov 1 00:25:33.220645 kubelet[2546]: I1101 00:25:33.220551 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f8b1313-5d3a-421c-a1c3-861bc7b1da27-calico-apiserver-certs\") pod \"calico-apiserver-57df9d5c69-4s2pm\" (UID: \"6f8b1313-5d3a-421c-a1c3-861bc7b1da27\") " pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" Nov 1 00:25:33.220645 kubelet[2546]: I1101 00:25:33.220569 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4745334-2dc8-452d-b994-9002bb77af9f-config-volume\") pod \"coredns-66bc5c9577-9httv\" (UID: \"b4745334-2dc8-452d-b994-9002bb77af9f\") " pod="kube-system/coredns-66bc5c9577-9httv" Nov 1 00:25:33.220645 kubelet[2546]: I1101 00:25:33.220592 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a394b5b4-84f6-43c3-bf21-09838f083553-config\") pod \"goldmane-7c778bb748-t79g9\" (UID: \"a394b5b4-84f6-43c3-bf21-09838f083553\") " pod="calico-system/goldmane-7c778bb748-t79g9" Nov 1 00:25:33.220846 kubelet[2546]: I1101 00:25:33.220607 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b66f29d-a0c7-459a-a622-8bd163fa7e38-config-volume\") pod \"coredns-66bc5c9577-k8blm\" (UID: \"5b66f29d-a0c7-459a-a622-8bd163fa7e38\") " pod="kube-system/coredns-66bc5c9577-k8blm" Nov 1 00:25:33.220846 kubelet[2546]: I1101 00:25:33.220622 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnmwt\" (UniqueName: \"kubernetes.io/projected/5b66f29d-a0c7-459a-a622-8bd163fa7e38-kube-api-access-dnmwt\") pod \"coredns-66bc5c9577-k8blm\" (UID: \"5b66f29d-a0c7-459a-a622-8bd163fa7e38\") " pod="kube-system/coredns-66bc5c9577-k8blm" Nov 1 00:25:33.222083 kubelet[2546]: I1101 00:25:33.221294 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a394b5b4-84f6-43c3-bf21-09838f083553-goldmane-key-pair\") pod 
\"goldmane-7c778bb748-t79g9\" (UID: \"a394b5b4-84f6-43c3-bf21-09838f083553\") " pod="calico-system/goldmane-7c778bb748-t79g9" Nov 1 00:25:33.222083 kubelet[2546]: I1101 00:25:33.221319 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ztp\" (UniqueName: \"kubernetes.io/projected/a394b5b4-84f6-43c3-bf21-09838f083553-kube-api-access-22ztp\") pod \"goldmane-7c778bb748-t79g9\" (UID: \"a394b5b4-84f6-43c3-bf21-09838f083553\") " pod="calico-system/goldmane-7c778bb748-t79g9" Nov 1 00:25:33.222083 kubelet[2546]: I1101 00:25:33.221335 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrkg\" (UniqueName: \"kubernetes.io/projected/b4745334-2dc8-452d-b994-9002bb77af9f-kube-api-access-hdrkg\") pod \"coredns-66bc5c9577-9httv\" (UID: \"b4745334-2dc8-452d-b994-9002bb77af9f\") " pod="kube-system/coredns-66bc5c9577-9httv" Nov 1 00:25:33.222083 kubelet[2546]: I1101 00:25:33.221374 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5t2\" (UniqueName: \"kubernetes.io/projected/ee70047e-c633-44d7-81f9-abf395209951-kube-api-access-4k5t2\") pod \"whisker-78997778df-lcfj5\" (UID: \"ee70047e-c633-44d7-81f9-abf395209951\") " pod="calico-system/whisker-78997778df-lcfj5" Nov 1 00:25:33.222083 kubelet[2546]: I1101 00:25:33.221390 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1449e27d-cfd3-4b57-8ca8-d99ff2c00988-calico-apiserver-certs\") pod \"calico-apiserver-79458bd765-tc96j\" (UID: \"1449e27d-cfd3-4b57-8ca8-d99ff2c00988\") " pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" Nov 1 00:25:33.222245 kubelet[2546]: I1101 00:25:33.221406 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8z9l\" (UniqueName: \"kubernetes.io/projected/1449e27d-cfd3-4b57-8ca8-d99ff2c00988-kube-api-access-l8z9l\") pod \"calico-apiserver-79458bd765-tc96j\" (UID: \"1449e27d-cfd3-4b57-8ca8-d99ff2c00988\") " pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" Nov 1 00:25:33.222245 kubelet[2546]: I1101 00:25:33.221430 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a394b5b4-84f6-43c3-bf21-09838f083553-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-t79g9\" (UID: \"a394b5b4-84f6-43c3-bf21-09838f083553\") " pod="calico-system/goldmane-7c778bb748-t79g9" Nov 1 00:25:33.222245 kubelet[2546]: I1101 00:25:33.221444 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7decf862-2dea-422d-a655-b341baeeaa59-calico-apiserver-certs\") pod \"calico-apiserver-57df9d5c69-r82hw\" (UID: \"7decf862-2dea-422d-a655-b341baeeaa59\") " pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" Nov 1 00:25:33.222245 kubelet[2546]: I1101 00:25:33.221462 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp97d\" (UniqueName: \"kubernetes.io/projected/7decf862-2dea-422d-a655-b341baeeaa59-kube-api-access-sp97d\") pod \"calico-apiserver-57df9d5c69-r82hw\" (UID: \"7decf862-2dea-422d-a655-b341baeeaa59\") " pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" Nov 1 00:25:33.222245 kubelet[2546]: 
I1101 00:25:33.221479 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7h74\" (UniqueName: \"kubernetes.io/projected/6f8b1313-5d3a-421c-a1c3-861bc7b1da27-kube-api-access-k7h74\") pod \"calico-apiserver-57df9d5c69-4s2pm\" (UID: \"6f8b1313-5d3a-421c-a1c3-861bc7b1da27\") " pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" Nov 1 00:25:33.312529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfc5097cb7693db3d57e2e14a47c63af4591d8ae039e16c0d70a3940c60fed58-rootfs.mount: Deactivated successfully. Nov 1 00:25:33.351986 containerd[1464]: time="2025-11-01T00:25:33.351843272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5879cb96-dlw8c,Uid:c30f1fcc-7cd9-400f-884f-bd1e3091973a,Namespace:calico-system,Attempt:0,}" Nov 1 00:25:33.395835 containerd[1464]: time="2025-11-01T00:25:33.395795776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-4s2pm,Uid:6f8b1313-5d3a-421c-a1c3-861bc7b1da27,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:25:33.412936 containerd[1464]: time="2025-11-01T00:25:33.412680095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79458bd765-tc96j,Uid:1449e27d-cfd3-4b57-8ca8-d99ff2c00988,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:25:33.427141 containerd[1464]: time="2025-11-01T00:25:33.427115716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78997778df-lcfj5,Uid:ee70047e-c633-44d7-81f9-abf395209951,Namespace:calico-system,Attempt:0,}" Nov 1 00:25:33.453578 containerd[1464]: time="2025-11-01T00:25:33.452255092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t79g9,Uid:a394b5b4-84f6-43c3-bf21-09838f083553,Namespace:calico-system,Attempt:0,}" Nov 1 00:25:33.522746 containerd[1464]: time="2025-11-01T00:25:33.522283750Z" level=error msg="Failed to destroy network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.524207 containerd[1464]: time="2025-11-01T00:25:33.524169427Z" level=error msg="encountered an error cleaning up failed sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.524291 containerd[1464]: time="2025-11-01T00:25:33.524226028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5879cb96-dlw8c,Uid:c30f1fcc-7cd9-400f-884f-bd1e3091973a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.524598 kubelet[2546]: E1101 00:25:33.524485 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:25:33.524598 kubelet[2546]: E1101 00:25:33.524582 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c"
Nov 1 00:25:33.524729 kubelet[2546]: E1101 00:25:33.524637 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c"
Nov 1 00:25:33.524967 kubelet[2546]: E1101 00:25:33.524826 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f5879cb96-dlw8c_calico-system(c30f1fcc-7cd9-400f-884f-bd1e3091973a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f5879cb96-dlw8c_calico-system(c30f1fcc-7cd9-400f-884f-bd1e3091973a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a"
Nov 1 00:25:33.530254 containerd[1464]: time="2025-11-01T00:25:33.530181148Z" level=error msg="Failed to destroy network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:25:33.533653 containerd[1464]: time="2025-11-01T00:25:33.533012915Z" level=error msg="encountered an error cleaning up failed sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:25:33.533653 containerd[1464]: time="2025-11-01T00:25:33.533611968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-4s2pm,Uid:6f8b1313-5d3a-421c-a1c3-861bc7b1da27,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
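From here on, every RunPodSandbox attempt bottoms out in the same stat: the Calico CNI plugin will not configure pod networking until calico/node has written this node's name to /var/lib/calico/nodename, which only happens once that container is running with /var/lib/calico/ mounted. A sketch of the guard the message implies (an assumption-level reconstruction, not projectcalico's code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// readNodename fails with the guidance seen in the log when calico/node
// has not yet written the node name.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// err already stringifies as "stat /var/lib/calico/nodename:
		// no such file or directory"; the operator hint is appended.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Println("cni add failed:", err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```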
\"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.534380 kubelet[2546]: E1101 00:25:33.534375 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" Nov 1 00:25:33.534577 kubelet[2546]: E1101 00:25:33.534393 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" Nov 1 00:25:33.534577 kubelet[2546]: E1101 00:25:33.534435 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57df9d5c69-4s2pm_calico-apiserver(6f8b1313-5d3a-421c-a1c3-861bc7b1da27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57df9d5c69-4s2pm_calico-apiserver(6f8b1313-5d3a-421c-a1c3-861bc7b1da27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:25:33.547430 containerd[1464]: time="2025-11-01T00:25:33.547288732Z" level=error msg="Failed to destroy network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.547881 containerd[1464]: time="2025-11-01T00:25:33.547856514Z" level=error msg="encountered an error cleaning up failed sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.548040 containerd[1464]: time="2025-11-01T00:25:33.547968076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79458bd765-tc96j,Uid:1449e27d-cfd3-4b57-8ca8-d99ff2c00988,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.548288 kubelet[2546]: E1101 00:25:33.548261 2546 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.548353 kubelet[2546]: E1101 00:25:33.548303 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" Nov 1 00:25:33.548353 kubelet[2546]: E1101 00:25:33.548321 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" Nov 1 00:25:33.548404 kubelet[2546]: E1101 00:25:33.548362 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79458bd765-tc96j_calico-apiserver(1449e27d-cfd3-4b57-8ca8-d99ff2c00988)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79458bd765-tc96j_calico-apiserver(1449e27d-cfd3-4b57-8ca8-d99ff2c00988)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:25:33.578454 containerd[1464]: time="2025-11-01T00:25:33.577993820Z" level=error msg="Failed to destroy network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.578454 containerd[1464]: time="2025-11-01T00:25:33.578358277Z" level=error msg="encountered an error cleaning up failed sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.578454 containerd[1464]: time="2025-11-01T00:25:33.578409388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t79g9,Uid:a394b5b4-84f6-43c3-bf21-09838f083553,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.578990 kubelet[2546]: E1101 00:25:33.578953 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.579155 kubelet[2546]: E1101 00:25:33.579008 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t79g9" Nov 1 00:25:33.579155 kubelet[2546]: E1101 00:25:33.579056 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t79g9" Nov 1 00:25:33.579272 kubelet[2546]: E1101 00:25:33.579143 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-t79g9_calico-system(a394b5b4-84f6-43c3-bf21-09838f083553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-t79g9_calico-system(a394b5b4-84f6-43c3-bf21-09838f083553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:25:33.581412 containerd[1464]: time="2025-11-01T00:25:33.581360627Z" level=error msg="Failed to destroy network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.581919 containerd[1464]: time="2025-11-01T00:25:33.581891778Z" level=error msg="encountered an error cleaning up failed sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.581976 containerd[1464]: time="2025-11-01T00:25:33.581933989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78997778df-lcfj5,Uid:ee70047e-c633-44d7-81f9-abf395209951,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.582190 kubelet[2546]: E1101 00:25:33.582099 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.582190 kubelet[2546]: E1101 00:25:33.582130 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78997778df-lcfj5" Nov 1 00:25:33.582190 kubelet[2546]: E1101 00:25:33.582146 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78997778df-lcfj5" Nov 1 00:25:33.582342 kubelet[2546]: E1101 00:25:33.582179 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78997778df-lcfj5_calico-system(ee70047e-c633-44d7-81f9-abf395209951)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78997778df-lcfj5_calico-system(ee70047e-c633-44d7-81f9-abf395209951)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78997778df-lcfj5" podUID="ee70047e-c633-44d7-81f9-abf395209951" Nov 1 00:25:33.652325 kubelet[2546]: E1101 00:25:33.652277 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:33.653591 containerd[1464]: time="2025-11-01T00:25:33.653290644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k8blm,Uid:5b66f29d-a0c7-459a-a622-8bd163fa7e38,Namespace:kube-system,Attempt:0,}" Nov 1 00:25:33.671074 kubelet[2546]: E1101 00:25:33.670871 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:33.672847 containerd[1464]: time="2025-11-01T00:25:33.672815356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9httv,Uid:b4745334-2dc8-452d-b994-9002bb77af9f,Namespace:kube-system,Attempt:0,}" Nov 1 00:25:33.682926 containerd[1464]: time="2025-11-01T00:25:33.682869998Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-r82hw,Uid:7decf862-2dea-422d-a655-b341baeeaa59,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:25:33.754197 containerd[1464]: time="2025-11-01T00:25:33.754118011Z" level=error msg="Failed to destroy network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.755124 containerd[1464]: time="2025-11-01T00:25:33.755100201Z" level=error msg="encountered an error cleaning up failed sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.755340 containerd[1464]: time="2025-11-01T00:25:33.755313135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k8blm,Uid:5b66f29d-a0c7-459a-a622-8bd163fa7e38,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.756084 kubelet[2546]: E1101 00:25:33.755700 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.756084 kubelet[2546]: E1101 00:25:33.755763 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k8blm" Nov 1 00:25:33.756084 kubelet[2546]: E1101 00:25:33.755785 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k8blm" Nov 1 00:25:33.756205 kubelet[2546]: E1101 00:25:33.755842 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-k8blm_kube-system(5b66f29d-a0c7-459a-a622-8bd163fa7e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-k8blm_kube-system(5b66f29d-a0c7-459a-a622-8bd163fa7e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k8blm" podUID="5b66f29d-a0c7-459a-a622-8bd163fa7e38" Nov 1 00:25:33.783660 containerd[1464]: time="2025-11-01T00:25:33.782819138Z" level=error msg="Failed to destroy network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.787653 containerd[1464]: time="2025-11-01T00:25:33.787624556Z" level=error msg="encountered an error cleaning up failed sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.788082 containerd[1464]: time="2025-11-01T00:25:33.787789189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-r82hw,Uid:7decf862-2dea-422d-a655-b341baeeaa59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.788525 kubelet[2546]: E1101 00:25:33.788467 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.788525 kubelet[2546]: E1101 00:25:33.788514 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" Nov 1 00:25:33.788704 kubelet[2546]: E1101 00:25:33.788535 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" Nov 1 00:25:33.788704 kubelet[2546]: E1101 00:25:33.788574 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57df9d5c69-r82hw_calico-apiserver(7decf862-2dea-422d-a655-b341baeeaa59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57df9d5c69-r82hw_calico-apiserver(7decf862-2dea-422d-a655-b341baeeaa59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:25:33.790491 containerd[1464]: time="2025-11-01T00:25:33.790442192Z" level=error msg="Failed to destroy network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.791423 containerd[1464]: time="2025-11-01T00:25:33.791395581Z" level=error msg="encountered an error cleaning up failed sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.791552 containerd[1464]: time="2025-11-01T00:25:33.791528993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9httv,Uid:b4745334-2dc8-452d-b994-9002bb77af9f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.791763 kubelet[2546]: E1101 00:25:33.791721 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.791820 kubelet[2546]: E1101 00:25:33.791765 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9httv" Nov 1 00:25:33.791820 kubelet[2546]: E1101 00:25:33.791786 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9httv" Nov 1 00:25:33.791871 kubelet[2546]: E1101 00:25:33.791824 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9httv_kube-system(b4745334-2dc8-452d-b994-9002bb77af9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9httv_kube-system(b4745334-2dc8-452d-b994-9002bb77af9f)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9httv" podUID="b4745334-2dc8-452d-b994-9002bb77af9f" Nov 1 00:25:33.925695 systemd[1]: Created slice kubepods-besteffort-pod12cca151_8712_4604_9035_7f2e07caab0c.slice - libcontainer container kubepods-besteffort-pod12cca151_8712_4604_9035_7f2e07caab0c.slice. Nov 1 00:25:33.931315 containerd[1464]: time="2025-11-01T00:25:33.930949027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8hdgb,Uid:12cca151-8712-4604-9035-7f2e07caab0c,Namespace:calico-system,Attempt:0,}" Nov 1 00:25:33.992166 containerd[1464]: time="2025-11-01T00:25:33.992125658Z" level=error msg="Failed to destroy network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.992559 containerd[1464]: time="2025-11-01T00:25:33.992529326Z" level=error msg="encountered an error cleaning up failed sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.992678 containerd[1464]: time="2025-11-01T00:25:33.992648518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8hdgb,Uid:12cca151-8712-4604-9035-7f2e07caab0c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.993180 kubelet[2546]: E1101 00:25:33.993144 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:33.993518 kubelet[2546]: E1101 00:25:33.993203 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8hdgb" Nov 1 00:25:33.993518 kubelet[2546]: E1101 00:25:33.993224 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8hdgb"
Nov 1 00:25:33.993518 kubelet[2546]: E1101 00:25:33.993272 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c"
Nov 1 00:25:34.054462 kubelet[2546]: E1101 00:25:34.053358 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Nov 1 00:25:34.055878 kubelet[2546]: I1101 00:25:34.055596 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b"
Nov 1 00:25:34.056583 containerd[1464]: time="2025-11-01T00:25:34.056131751Z" level=info msg="StopPodSandbox for \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\""
Nov 1 00:25:34.056583 containerd[1464]: time="2025-11-01T00:25:34.056274924Z" level=info msg="Ensure that sandbox 46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b in task-service has been cleanup successfully"
Nov 1 00:25:34.056841 containerd[1464]: time="2025-11-01T00:25:34.056815164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 1 00:25:34.060008 kubelet[2546]: I1101 00:25:34.059989 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873"
Nov 1 00:25:34.062495 containerd[1464]: time="2025-11-01T00:25:34.062406870Z" level=info msg="StopPodSandbox for \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\""
Nov 1 00:25:34.067063 containerd[1464]: time="2025-11-01T00:25:34.064874726Z" level=info msg="Ensure that sandbox 52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873 in task-service has been cleanup successfully"
Nov 1 00:25:34.067238 kubelet[2546]: I1101 00:25:34.067217 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa"
Nov 1 00:25:34.071568 containerd[1464]: time="2025-11-01T00:25:34.071536923Z" level=info msg="StopPodSandbox for \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\""
Nov 1 00:25:34.071721 containerd[1464]: time="2025-11-01T00:25:34.071693086Z" level=info msg="Ensure that sandbox 5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa in task-service has been cleanup successfully"
Nov 1 00:25:34.078420 kubelet[2546]: I1101 00:25:34.078378 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c"
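kubelet now garbage-collects the half-created sandboxes, and the StopPodSandbox calls below fail on the same missing nodename file, this time on the CNI delete path. One detail worth noting: containerd returns plain error strings, and kubelet sees them through the CRI gRPC client, which is where the recurring "rpc error: code = Unknown desc = ..." prefix comes from. A minimal reproduction of that wrapping (the sandbox id is abbreviated):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// The CRI server surfaces containerd's error text as a gRPC status
	// with code Unknown; kubelet logs the status's stringified form.
	msg := `failed to destroy network for sandbox "46170e...": plugin type="calico" failed (delete): stat /var/lib/calico/nodename: no such file or directory`
	err := status.Error(codes.Unknown, msg)
	fmt.Println(err) // rpc error: code = Unknown desc = failed to destroy network for sandbox ...
}
```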
00:25:34.083164 containerd[1464]: time="2025-11-01T00:25:34.083078842Z" level=info msg="Ensure that sandbox db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c in task-service has been cleanup successfully" Nov 1 00:25:34.089087 kubelet[2546]: I1101 00:25:34.088900 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:25:34.091247 containerd[1464]: time="2025-11-01T00:25:34.091224846Z" level=info msg="StopPodSandbox for \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\"" Nov 1 00:25:34.091693 containerd[1464]: time="2025-11-01T00:25:34.091671925Z" level=info msg="Ensure that sandbox 384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a in task-service has been cleanup successfully" Nov 1 00:25:34.102131 kubelet[2546]: I1101 00:25:34.100838 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:25:34.102484 containerd[1464]: time="2025-11-01T00:25:34.102461569Z" level=info msg="StopPodSandbox for \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\"" Nov 1 00:25:34.105007 kubelet[2546]: I1101 00:25:34.104989 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:25:34.108001 containerd[1464]: time="2025-11-01T00:25:34.107977913Z" level=info msg="Ensure that sandbox 2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b in task-service has been cleanup successfully" Nov 1 00:25:34.110278 containerd[1464]: time="2025-11-01T00:25:34.110256056Z" level=info msg="StopPodSandbox for \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\"" Nov 1 00:25:34.112597 containerd[1464]: time="2025-11-01T00:25:34.112557480Z" level=info msg="Ensure that sandbox 7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788 in task-service has been cleanup successfully" Nov 1 00:25:34.123417 kubelet[2546]: I1101 00:25:34.123395 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:25:34.128413 containerd[1464]: time="2025-11-01T00:25:34.128107825Z" level=info msg="StopPodSandbox for \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\"" Nov 1 00:25:34.128413 containerd[1464]: time="2025-11-01T00:25:34.128236887Z" level=info msg="Ensure that sandbox cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104 in task-service has been cleanup successfully" Nov 1 00:25:34.147347 kubelet[2546]: I1101 00:25:34.147325 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:25:34.148306 containerd[1464]: time="2025-11-01T00:25:34.147955761Z" level=info msg="StopPodSandbox for \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\"" Nov 1 00:25:34.148306 containerd[1464]: time="2025-11-01T00:25:34.148123334Z" level=info msg="Ensure that sandbox 83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7 in task-service has been cleanup successfully" Nov 1 00:25:34.195765 containerd[1464]: time="2025-11-01T00:25:34.195707076Z" level=error msg="StopPodSandbox for \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\" failed" error="failed to destroy 
network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.196392 kubelet[2546]: E1101 00:25:34.196169 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:25:34.196392 kubelet[2546]: E1101 00:25:34.196224 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b"} Nov 1 00:25:34.196392 kubelet[2546]: E1101 00:25:34.196271 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4745334-2dc8-452d-b994-9002bb77af9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.196392 kubelet[2546]: E1101 00:25:34.196355 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4745334-2dc8-452d-b994-9002bb77af9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9httv" podUID="b4745334-2dc8-452d-b994-9002bb77af9f" Nov 1 00:25:34.216499 containerd[1464]: time="2025-11-01T00:25:34.216313085Z" level=error msg="StopPodSandbox for \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\" failed" error="failed to destroy network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.216582 kubelet[2546]: E1101 00:25:34.216504 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:25:34.216582 kubelet[2546]: E1101 00:25:34.216538 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788"} Nov 1 00:25:34.216582 kubelet[2546]: E1101 00:25:34.216566 2546 kuberuntime_manager.go:1233] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7decf862-2dea-422d-a655-b341baeeaa59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.216743 kubelet[2546]: E1101 00:25:34.216589 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7decf862-2dea-422d-a655-b341baeeaa59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:25:34.219078 containerd[1464]: time="2025-11-01T00:25:34.217882816Z" level=error msg="StopPodSandbox for \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\" failed" error="failed to destroy network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.219143 kubelet[2546]: E1101 00:25:34.218063 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:25:34.219143 kubelet[2546]: E1101 00:25:34.218103 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c"} Nov 1 00:25:34.219143 kubelet[2546]: E1101 00:25:34.218129 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee70047e-c633-44d7-81f9-abf395209951\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.219143 kubelet[2546]: E1101 00:25:34.218153 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee70047e-c633-44d7-81f9-abf395209951\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78997778df-lcfj5" 
podUID="ee70047e-c633-44d7-81f9-abf395209951" Nov 1 00:25:34.223757 containerd[1464]: time="2025-11-01T00:25:34.223703776Z" level=error msg="StopPodSandbox for \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\" failed" error="failed to destroy network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.223941 kubelet[2546]: E1101 00:25:34.223867 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:25:34.224015 kubelet[2546]: E1101 00:25:34.223944 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a"} Nov 1 00:25:34.224015 kubelet[2546]: E1101 00:25:34.223994 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f8b1313-5d3a-421c-a1c3-861bc7b1da27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.224158 kubelet[2546]: E1101 00:25:34.224023 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f8b1313-5d3a-421c-a1c3-861bc7b1da27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:25:34.238788 containerd[1464]: time="2025-11-01T00:25:34.238704430Z" level=error msg="StopPodSandbox for \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\" failed" error="failed to destroy network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.239203 kubelet[2546]: E1101 00:25:34.239171 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:25:34.239446 kubelet[2546]: 
E1101 00:25:34.239314 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7"} Nov 1 00:25:34.239446 kubelet[2546]: E1101 00:25:34.239382 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c30f1fcc-7cd9-400f-884f-bd1e3091973a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.239722 kubelet[2546]: E1101 00:25:34.239604 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c30f1fcc-7cd9-400f-884f-bd1e3091973a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:25:34.240151 containerd[1464]: time="2025-11-01T00:25:34.240102346Z" level=error msg="StopPodSandbox for \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\" failed" error="failed to destroy network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.240601 kubelet[2546]: E1101 00:25:34.240478 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:25:34.240601 kubelet[2546]: E1101 00:25:34.240533 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa"} Nov 1 00:25:34.240601 kubelet[2546]: E1101 00:25:34.240553 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12cca151-8712-4604-9035-7f2e07caab0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.240601 kubelet[2546]: E1101 00:25:34.240575 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12cca151-8712-4604-9035-7f2e07caab0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:34.250973 containerd[1464]: time="2025-11-01T00:25:34.250925631Z" level=error msg="StopPodSandbox for \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\" failed" error="failed to destroy network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.252243 kubelet[2546]: E1101 00:25:34.252176 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:25:34.252358 kubelet[2546]: E1101 00:25:34.252254 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873"} Nov 1 00:25:34.252358 kubelet[2546]: E1101 00:25:34.252283 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b66f29d-a0c7-459a-a622-8bd163fa7e38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.252358 kubelet[2546]: E1101 00:25:34.252308 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b66f29d-a0c7-459a-a622-8bd163fa7e38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k8blm" podUID="5b66f29d-a0c7-459a-a622-8bd163fa7e38" Nov 1 00:25:34.256906 containerd[1464]: time="2025-11-01T00:25:34.256852924Z" level=error msg="StopPodSandbox for \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\" failed" error="failed to destroy network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.257058 kubelet[2546]: E1101 00:25:34.257005 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:25:34.257153 kubelet[2546]: E1101 00:25:34.257107 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104"} Nov 1 00:25:34.257153 kubelet[2546]: E1101 00:25:34.257145 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a394b5b4-84f6-43c3-bf21-09838f083553\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.257315 kubelet[2546]: E1101 00:25:34.257167 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a394b5b4-84f6-43c3-bf21-09838f083553\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:25:34.258830 containerd[1464]: time="2025-11-01T00:25:34.258796081Z" level=error msg="StopPodSandbox for \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\" failed" error="failed to destroy network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:25:34.258995 kubelet[2546]: E1101 00:25:34.258947 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:25:34.258995 kubelet[2546]: E1101 00:25:34.258984 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b"} Nov 1 00:25:34.259117 kubelet[2546]: E1101 00:25:34.259009 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1449e27d-cfd3-4b57-8ca8-d99ff2c00988\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:25:34.259206 kubelet[2546]: E1101 00:25:34.259128 2546 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1449e27d-cfd3-4b57-8ca8-d99ff2c00988\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:25:38.416592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460105568.mount: Deactivated successfully. Nov 1 00:25:38.449008 containerd[1464]: time="2025-11-01T00:25:38.448951390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:38.450512 containerd[1464]: time="2025-11-01T00:25:38.450382862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:25:38.451417 containerd[1464]: time="2025-11-01T00:25:38.451368237Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:38.454279 containerd[1464]: time="2025-11-01T00:25:38.454231870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:38.455467 containerd[1464]: time="2025-11-01T00:25:38.455333865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.398419719s" Nov 1 00:25:38.455467 containerd[1464]: time="2025-11-01T00:25:38.455372947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:25:38.492402 containerd[1464]: time="2025-11-01T00:25:38.491601779Z" level=info msg="CreateContainer within sandbox \"0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:25:38.515654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812900800.mount: Deactivated successfully. Nov 1 00:25:38.527223 containerd[1464]: time="2025-11-01T00:25:38.527160502Z" level=info msg="CreateContainer within sandbox \"0d328817b06e095a4281eea86a4a05284e28f73662fe7f9e857c35d6d00cbc13\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6592e6c6e12e26eb7ce02ed059b1512b75c37844a8d86b9fb9550abbfd284e14\"" Nov 1 00:25:38.527915 containerd[1464]: time="2025-11-01T00:25:38.527882742Z" level=info msg="StartContainer for \"6592e6c6e12e26eb7ce02ed059b1512b75c37844a8d86b9fb9550abbfd284e14\"" Nov 1 00:25:38.569197 systemd[1]: Started cri-containerd-6592e6c6e12e26eb7ce02ed059b1512b75c37844a8d86b9fb9550abbfd284e14.scope - libcontainer container 6592e6c6e12e26eb7ce02ed059b1512b75c37844a8d86b9fb9550abbfd284e14. 
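
Every failed CNI add and delete above stats the same path, /var/lib/calico/nodename, a file that calico/node writes only once it is running with /var/lib/calico/ mounted; the image pull and StartContainer events just above are what finally unblock sandbox setup. A minimal Go sketch of what such a readiness gate reduces to, with readNodename as a hypothetical helper rather than the actual Calico source:

    package main

    import (
    	"fmt"
    	"os"
    )

    // nodenameFile is the path every failed add/delete above stats; calico/node
    // creates it after startup, which is why the errors stop once the
    // calico-node container from the StartContainer event above is running.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename is an illustrative helper, not the real Calico code path.
    func readNodename() (string, error) {
    	if _, err := os.Stat(nodenameFile); err != nil {
    		// Same shape as the log line: fail fast and point at calico/node.
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	b, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		return "", err
    	}
    	return string(b), nil
    }

    func main() {
    	name, err := readNodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("node:", name)
    }

Until the file exists every networking request fails fast with the exact error string seen above, which is why kubelet keeps retrying the sandboxes rather than wedging.
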
Nov 1 00:25:38.608626 containerd[1464]: time="2025-11-01T00:25:38.608459129Z" level=info msg="StartContainer for \"6592e6c6e12e26eb7ce02ed059b1512b75c37844a8d86b9fb9550abbfd284e14\" returns successfully" Nov 1 00:25:38.715632 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:25:38.716436 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 1 00:25:38.933441 containerd[1464]: time="2025-11-01T00:25:38.933375495Z" level=info msg="StopPodSandbox for \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\"" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.036 [INFO][3832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.037 [INFO][3832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" iface="eth0" netns="/var/run/netns/cni-68b43677-2b04-6433-d753-3acfe9e7f19f" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.038 [INFO][3832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" iface="eth0" netns="/var/run/netns/cni-68b43677-2b04-6433-d753-3acfe9e7f19f" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.042 [INFO][3832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" iface="eth0" netns="/var/run/netns/cni-68b43677-2b04-6433-d753-3acfe9e7f19f" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.042 [INFO][3832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.042 [INFO][3832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.080 [INFO][3841] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.080 [INFO][3841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.081 [INFO][3841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.090 [WARNING][3841] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.090 [INFO][3841] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.092 [INFO][3841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:39.102258 containerd[1464]: 2025-11-01 00:25:39.098 [INFO][3832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:25:39.102696 containerd[1464]: time="2025-11-01T00:25:39.102510173Z" level=info msg="TearDown network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\" successfully" Nov 1 00:25:39.102696 containerd[1464]: time="2025-11-01T00:25:39.102564334Z" level=info msg="StopPodSandbox for \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\" returns successfully" Nov 1 00:25:39.164433 kubelet[2546]: I1101 00:25:39.163440 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee70047e-c633-44d7-81f9-abf395209951-whisker-ca-bundle\") pod \"ee70047e-c633-44d7-81f9-abf395209951\" (UID: \"ee70047e-c633-44d7-81f9-abf395209951\") " Nov 1 00:25:39.164433 kubelet[2546]: I1101 00:25:39.163478 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k5t2\" (UniqueName: \"kubernetes.io/projected/ee70047e-c633-44d7-81f9-abf395209951-kube-api-access-4k5t2\") pod \"ee70047e-c633-44d7-81f9-abf395209951\" (UID: \"ee70047e-c633-44d7-81f9-abf395209951\") " Nov 1 00:25:39.164433 kubelet[2546]: I1101 00:25:39.163508 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee70047e-c633-44d7-81f9-abf395209951-whisker-backend-key-pair\") pod \"ee70047e-c633-44d7-81f9-abf395209951\" (UID: \"ee70047e-c633-44d7-81f9-abf395209951\") " Nov 1 00:25:39.166323 kubelet[2546]: I1101 00:25:39.165638 2546 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee70047e-c633-44d7-81f9-abf395209951-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ee70047e-c633-44d7-81f9-abf395209951" (UID: "ee70047e-c633-44d7-81f9-abf395209951"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:25:39.174984 kubelet[2546]: I1101 00:25:39.174934 2546 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee70047e-c633-44d7-81f9-abf395209951-kube-api-access-4k5t2" (OuterVolumeSpecName: "kube-api-access-4k5t2") pod "ee70047e-c633-44d7-81f9-abf395209951" (UID: "ee70047e-c633-44d7-81f9-abf395209951"). InnerVolumeSpecName "kube-api-access-4k5t2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:25:39.179196 kubelet[2546]: I1101 00:25:39.179156 2546 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee70047e-c633-44d7-81f9-abf395209951-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ee70047e-c633-44d7-81f9-abf395209951" (UID: "ee70047e-c633-44d7-81f9-abf395209951"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:25:39.184200 kubelet[2546]: E1101 00:25:39.183809 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:39.192867 systemd[1]: Removed slice kubepods-besteffort-podee70047e_c633_44d7_81f9_abf395209951.slice - libcontainer container kubepods-besteffort-podee70047e_c633_44d7_81f9_abf395209951.slice. Nov 1 00:25:39.239364 kubelet[2546]: I1101 00:25:39.239292 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tp95l" podStartSLOduration=1.472424341 podStartE2EDuration="13.239279726s" podCreationTimestamp="2025-11-01 00:25:26 +0000 UTC" firstStartedPulling="2025-11-01 00:25:26.690067545 +0000 UTC m=+20.881005630" lastFinishedPulling="2025-11-01 00:25:38.45692292 +0000 UTC m=+32.647861015" observedRunningTime="2025-11-01 00:25:39.237312129 +0000 UTC m=+33.428250234" watchObservedRunningTime="2025-11-01 00:25:39.239279726 +0000 UTC m=+33.430217821" Nov 1 00:25:39.264329 kubelet[2546]: I1101 00:25:39.264273 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee70047e-c633-44d7-81f9-abf395209951-whisker-backend-key-pair\") on node \"172-234-26-141\" DevicePath \"\"" Nov 1 00:25:39.264329 kubelet[2546]: I1101 00:25:39.264313 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee70047e-c633-44d7-81f9-abf395209951-whisker-ca-bundle\") on node \"172-234-26-141\" DevicePath \"\"" Nov 1 00:25:39.264329 kubelet[2546]: I1101 00:25:39.264334 2546 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4k5t2\" (UniqueName: \"kubernetes.io/projected/ee70047e-c633-44d7-81f9-abf395209951-kube-api-access-4k5t2\") on node \"172-234-26-141\" DevicePath \"\"" Nov 1 00:25:39.310903 systemd[1]: Created slice kubepods-besteffort-pod009577cc_d930_45a6_aee8_0f7207b1b9a8.slice - libcontainer container kubepods-besteffort-pod009577cc_d930_45a6_aee8_0f7207b1b9a8.slice. 
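
The kubepods slices removed and created above follow a mechanical naming scheme: QoS class plus the pod UID with its dashes flattened to underscores, since '-' acts as a separator inside systemd slice names. A small sketch of that derivation, with podSlice as an illustrative helper rather than kubelet's actual cgroup-driver API:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice derives the systemd slice name for a pod from its QoS class and
    // UID, mirroring the "Created slice kubepods-besteffort-pod..." lines above.
    func podSlice(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// UID of the new whisker pod created in the log above.
    	fmt.Println(podSlice("besteffort", "009577cc-d930-45a6-aee8-0f7207b1b9a8"))
    }

Run with the UID above, it prints kubepods-besteffort-pod009577cc_d930_45a6_aee8_0f7207b1b9a8.slice, matching the Created slice line exactly.
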
Nov 1 00:25:39.364976 kubelet[2546]: I1101 00:25:39.364643 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009577cc-d930-45a6-aee8-0f7207b1b9a8-whisker-ca-bundle\") pod \"whisker-6fd7bd9949-qt64t\" (UID: \"009577cc-d930-45a6-aee8-0f7207b1b9a8\") " pod="calico-system/whisker-6fd7bd9949-qt64t" Nov 1 00:25:39.364976 kubelet[2546]: I1101 00:25:39.364885 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c79d9\" (UniqueName: \"kubernetes.io/projected/009577cc-d930-45a6-aee8-0f7207b1b9a8-kube-api-access-c79d9\") pod \"whisker-6fd7bd9949-qt64t\" (UID: \"009577cc-d930-45a6-aee8-0f7207b1b9a8\") " pod="calico-system/whisker-6fd7bd9949-qt64t" Nov 1 00:25:39.364976 kubelet[2546]: I1101 00:25:39.364920 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/009577cc-d930-45a6-aee8-0f7207b1b9a8-whisker-backend-key-pair\") pod \"whisker-6fd7bd9949-qt64t\" (UID: \"009577cc-d930-45a6-aee8-0f7207b1b9a8\") " pod="calico-system/whisker-6fd7bd9949-qt64t" Nov 1 00:25:39.417250 systemd[1]: run-netns-cni\x2d68b43677\x2d2b04\x2d6433\x2dd753\x2d3acfe9e7f19f.mount: Deactivated successfully. Nov 1 00:25:39.417374 systemd[1]: var-lib-kubelet-pods-ee70047e\x2dc633\x2d44d7\x2d81f9\x2dabf395209951-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:25:39.417464 systemd[1]: var-lib-kubelet-pods-ee70047e\x2dc633\x2d44d7\x2d81f9\x2dabf395209951-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4k5t2.mount: Deactivated successfully. Nov 1 00:25:39.618817 containerd[1464]: time="2025-11-01T00:25:39.618754820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd7bd9949-qt64t,Uid:009577cc-d930-45a6-aee8-0f7207b1b9a8,Namespace:calico-system,Attempt:0,}" Nov 1 00:25:39.737725 systemd-networkd[1380]: cali60035a4456d: Link UP Nov 1 00:25:39.739284 systemd-networkd[1380]: cali60035a4456d: Gained carrier Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.656 [INFO][3864] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.668 [INFO][3864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0 whisker-6fd7bd9949- calico-system 009577cc-d930-45a6-aee8-0f7207b1b9a8 922 0 2025-11-01 00:25:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fd7bd9949 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-26-141 whisker-6fd7bd9949-qt64t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali60035a4456d [] [] }} ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t" WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.668 [INFO][3864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t" WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" Nov 1 
00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.694 [INFO][3876] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" HandleID="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Workload="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.694 [INFO][3876] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" HandleID="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Workload="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-26-141", "pod":"whisker-6fd7bd9949-qt64t", "timestamp":"2025-11-01 00:25:39.694291919 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.694 [INFO][3876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.694 [INFO][3876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.694 [INFO][3876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.700 [INFO][3876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.705 [INFO][3876] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.709 [INFO][3876] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.711 [INFO][3876] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.714 [INFO][3876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.714 [INFO][3876] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.715 [INFO][3876] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9 Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.719 [INFO][3876] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.725 [INFO][3876] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.65/26] block=192.168.127.64/26 handle="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" 
host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.725 [INFO][3876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.65/26] handle="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" host="172-234-26-141" Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.725 [INFO][3876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:39.756274 containerd[1464]: 2025-11-01 00:25:39.725 [INFO][3876] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.65/26] IPv6=[] ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" HandleID="k8s-pod-network.41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Workload="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" Nov 1 00:25:39.757057 containerd[1464]: 2025-11-01 00:25:39.727 [INFO][3864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t" WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0", GenerateName:"whisker-6fd7bd9949-", Namespace:"calico-system", SelfLink:"", UID:"009577cc-d930-45a6-aee8-0f7207b1b9a8", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd7bd9949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"whisker-6fd7bd9949-qt64t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali60035a4456d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:39.757057 containerd[1464]: 2025-11-01 00:25:39.728 [INFO][3864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.65/32] ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t" WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" Nov 1 00:25:39.757057 containerd[1464]: 2025-11-01 00:25:39.728 [INFO][3864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60035a4456d ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t" WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" Nov 1 00:25:39.757057 containerd[1464]: 2025-11-01 00:25:39.738 [INFO][3864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t"
WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" Nov 1 00:25:39.757057 containerd[1464]: 2025-11-01 00:25:39.739 [INFO][3864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t" WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0", GenerateName:"whisker-6fd7bd9949-", Namespace:"calico-system", SelfLink:"", UID:"009577cc-d930-45a6-aee8-0f7207b1b9a8", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd7bd9949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9", Pod:"whisker-6fd7bd9949-qt64t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali60035a4456d", MAC:"6e:57:3c:57:f6:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:39.757057 containerd[1464]: 2025-11-01 00:25:39.748 [INFO][3864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9" Namespace="calico-system" Pod="whisker-6fd7bd9949-qt64t" WorkloadEndpoint="172--234--26--141-k8s-whisker--6fd7bd9949--qt64t-eth0" Nov 1 00:25:39.778775 containerd[1464]: time="2025-11-01T00:25:39.778258926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:39.778775 containerd[1464]: time="2025-11-01T00:25:39.778317437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:39.778775 containerd[1464]: time="2025-11-01T00:25:39.778331587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:39.778775 containerd[1464]: time="2025-11-01T00:25:39.778462518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:39.801181 systemd[1]: Started cri-containerd-41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9.scope - libcontainer container 41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9.
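
A quick sanity check of the IPAM lines above: the host's affine block 192.168.127.64/26 spans 64 addresses, and the single assigned IPv4, 192.168.127.65, is the first address after the block base. The arithmetic, using Go's standard net/netip package:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The block this host claimed affinity for in the log above.
    	block := netip.MustParsePrefix("192.168.127.64/26")
    	size := 1 << (32 - block.Bits()) // a /26 holds 64 addresses
    	first := block.Addr().Next()     // .65, matching the address assigned above
    	fmt.Printf("block %s holds %d addresses; first after base: %s\n", block, size, first)
    	fmt.Println(block.Contains(netip.MustParseAddr("192.168.127.65"))) // true
    }

Calico's real allocator also tracks handles, affinities, and reservations; this only verifies the block math visible in the log.
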
Nov 1 00:25:39.859283 containerd[1464]: time="2025-11-01T00:25:39.859247120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd7bd9949-qt64t,Uid:009577cc-d930-45a6-aee8-0f7207b1b9a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"41c11644753fa6ca2279083bbe6df5ae46a36ec5144f93bd438ef171108728c9\"" Nov 1 00:25:39.861098 containerd[1464]: time="2025-11-01T00:25:39.861071016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:25:39.920359 kubelet[2546]: I1101 00:25:39.920258 2546 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee70047e-c633-44d7-81f9-abf395209951" path="/var/lib/kubelet/pods/ee70047e-c633-44d7-81f9-abf395209951/volumes" Nov 1 00:25:39.999681 containerd[1464]: time="2025-11-01T00:25:39.999549774Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:40.000364 containerd[1464]: time="2025-11-01T00:25:40.000292164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:25:40.000460 containerd[1464]: time="2025-11-01T00:25:40.000343775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:25:40.000520 kubelet[2546]: E1101 00:25:40.000493 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:25:40.000563 kubelet[2546]: E1101 00:25:40.000529 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:25:40.000612 kubelet[2546]: E1101 00:25:40.000584 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:40.002154 containerd[1464]: time="2025-11-01T00:25:40.001767934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:25:40.129510 containerd[1464]: time="2025-11-01T00:25:40.129453938Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:40.130379 containerd[1464]: time="2025-11-01T00:25:40.130221749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:25:40.130379 containerd[1464]: 
time="2025-11-01T00:25:40.130274380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:25:40.130493 kubelet[2546]: E1101 00:25:40.130424 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:25:40.130493 kubelet[2546]: E1101 00:25:40.130471 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:25:40.130580 kubelet[2546]: E1101 00:25:40.130521 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:40.130580 kubelet[2546]: E1101 00:25:40.130561 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:25:40.186538 kubelet[2546]: E1101 00:25:40.185767 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:25:40.186538 kubelet[2546]: I1101 00:25:40.185882 2546 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Nov 1 00:25:40.187618 kubelet[2546]: E1101 00:25:40.187008 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:40.811151 kernel: bpftool[4054]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:25:41.081471 systemd-networkd[1380]: vxlan.calico: Link UP Nov 1 00:25:41.081504 systemd-networkd[1380]: vxlan.calico: Gained carrier Nov 1 00:25:41.200725 kubelet[2546]: E1101 00:25:41.199526 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:25:41.546510 systemd-networkd[1380]: cali60035a4456d: Gained IPv6LL Nov 1 00:25:42.954284 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Nov 1 00:25:45.921257 containerd[1464]: time="2025-11-01T00:25:45.920764538Z" level=info msg="StopPodSandbox for \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\"" Nov 1 00:25:45.922009 containerd[1464]: time="2025-11-01T00:25:45.921621036Z" level=info msg="StopPodSandbox for \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\"" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:45.997 [INFO][4148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:45.997 [INFO][4148] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" iface="eth0" netns="/var/run/netns/cni-388e8af8-7d80-bd70-a810-abe7cde5a286" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:45.997 [INFO][4148] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" iface="eth0" netns="/var/run/netns/cni-388e8af8-7d80-bd70-a810-abe7cde5a286" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:45.999 [INFO][4148] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" iface="eth0" netns="/var/run/netns/cni-388e8af8-7d80-bd70-a810-abe7cde5a286" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:45.999 [INFO][4148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:45.999 [INFO][4148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:46.036 [INFO][4164] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:46.036 [INFO][4164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:46.036 [INFO][4164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:46.043 [WARNING][4164] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:46.043 [INFO][4164] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:46.044 [INFO][4164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:46.052993 containerd[1464]: 2025-11-01 00:25:46.047 [INFO][4148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:25:46.052993 containerd[1464]: time="2025-11-01T00:25:46.050659621Z" level=info msg="TearDown network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\" successfully" Nov 1 00:25:46.054730 containerd[1464]: time="2025-11-01T00:25:46.052912432Z" level=info msg="StopPodSandbox for \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\" returns successfully" Nov 1 00:25:46.057191 kubelet[2546]: E1101 00:25:46.055961 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:46.056666 systemd[1]: run-netns-cni\x2d388e8af8\x2d7d80\x2dbd70\x2da810\x2dabe7cde5a286.mount: Deactivated successfully. 
Nov 1 00:25:46.060586 containerd[1464]: time="2025-11-01T00:25:46.060458803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9httv,Uid:b4745334-2dc8-452d-b994-9002bb77af9f,Namespace:kube-system,Attempt:1,}" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:45.990 [INFO][4149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:45.992 [INFO][4149] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" iface="eth0" netns="/var/run/netns/cni-45541a92-37e7-3ea6-e98d-c43e70bd9dec" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:45.992 [INFO][4149] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" iface="eth0" netns="/var/run/netns/cni-45541a92-37e7-3ea6-e98d-c43e70bd9dec" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:45.993 [INFO][4149] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" iface="eth0" netns="/var/run/netns/cni-45541a92-37e7-3ea6-e98d-c43e70bd9dec" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:45.993 [INFO][4149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:45.993 [INFO][4149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:46.039 [INFO][4162] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:46.040 [INFO][4162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:46.044 [INFO][4162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:46.052 [WARNING][4162] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:46.052 [INFO][4162] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:46.057 [INFO][4162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:46.065287 containerd[1464]: 2025-11-01 00:25:46.062 [INFO][4149] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:25:46.066391 containerd[1464]: time="2025-11-01T00:25:46.065460472Z" level=info msg="TearDown network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\" successfully" Nov 1 00:25:46.066391 containerd[1464]: time="2025-11-01T00:25:46.065511252Z" level=info msg="StopPodSandbox for \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\" returns successfully" Nov 1 00:25:46.070896 systemd[1]: run-netns-cni\x2d45541a92\x2d37e7\x2d3ea6\x2de98d\x2dc43e70bd9dec.mount: Deactivated successfully. Nov 1 00:25:46.071437 kubelet[2546]: E1101 00:25:46.071329 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:46.074683 containerd[1464]: time="2025-11-01T00:25:46.074421977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k8blm,Uid:5b66f29d-a0c7-459a-a622-8bd163fa7e38,Namespace:kube-system,Attempt:1,}" Nov 1 00:25:46.242000 systemd-networkd[1380]: cali31effdd235b: Link UP Nov 1 00:25:46.242453 systemd-networkd[1380]: cali31effdd235b: Gained carrier Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.154 [INFO][4176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0 coredns-66bc5c9577- kube-system b4745334-2dc8-452d-b994-9002bb77af9f 968 0 2025-11-01 00:25:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-26-141 coredns-66bc5c9577-9httv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali31effdd235b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.154 [INFO][4176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.196 [INFO][4199] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" HandleID="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.197 [INFO][4199] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" HandleID="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-26-141", "pod":"coredns-66bc5c9577-9httv", "timestamp":"2025-11-01 00:25:46.196815495 +0000 
UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.197 [INFO][4199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.197 [INFO][4199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.197 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.205 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.210 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.214 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.216 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.218 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.218 [INFO][4199] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.219 [INFO][4199] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.223 [INFO][4199] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.229 [INFO][4199] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.66/26] block=192.168.127.64/26 handle="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.229 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.66/26] handle="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" host="172-234-26-141" Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.229 [INFO][4199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:25:46.268441 containerd[1464]: 2025-11-01 00:25:46.229 [INFO][4199] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.66/26] IPv6=[] ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" HandleID="k8s-pod-network.86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.269300 containerd[1464]: 2025-11-01 00:25:46.233 [INFO][4176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4745334-2dc8-452d-b994-9002bb77af9f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"coredns-66bc5c9577-9httv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31effdd235b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:46.269300 containerd[1464]: 2025-11-01 00:25:46.234 [INFO][4176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.66/32] ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.269300 containerd[1464]: 2025-11-01 00:25:46.234 [INFO][4176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31effdd235b ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.269300 
containerd[1464]: 2025-11-01 00:25:46.245 [INFO][4176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.269300 containerd[1464]: 2025-11-01 00:25:46.246 [INFO][4176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4745334-2dc8-452d-b994-9002bb77af9f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e", Pod:"coredns-66bc5c9577-9httv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31effdd235b", MAC:"e6:d5:4f:cd:d4:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:46.269300 containerd[1464]: 2025-11-01 00:25:46.261 [INFO][4176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e" Namespace="kube-system" Pod="coredns-66bc5c9577-9httv" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:25:46.297489 containerd[1464]: time="2025-11-01T00:25:46.297387165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:46.297489 containerd[1464]: time="2025-11-01T00:25:46.297439695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:46.297489 containerd[1464]: time="2025-11-01T00:25:46.297453115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:46.299194 containerd[1464]: time="2025-11-01T00:25:46.297530856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:46.325405 systemd[1]: Started cri-containerd-86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e.scope - libcontainer container 86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e. Nov 1 00:25:46.360751 systemd-networkd[1380]: califda2a979fa0: Link UP Nov 1 00:25:46.363224 systemd-networkd[1380]: califda2a979fa0: Gained carrier Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.169 [INFO][4189] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0 coredns-66bc5c9577- kube-system 5b66f29d-a0c7-459a-a622-8bd163fa7e38 967 0 2025-11-01 00:25:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-26-141 coredns-66bc5c9577-k8blm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califda2a979fa0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.169 [INFO][4189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.201 [INFO][4205] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" HandleID="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.201 [INFO][4205] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" HandleID="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-26-141", "pod":"coredns-66bc5c9577-k8blm", "timestamp":"2025-11-01 00:25:46.201444379 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.201 [INFO][4205] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.229 [INFO][4205] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.229 [INFO][4205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.307 [INFO][4205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.317 [INFO][4205] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.323 [INFO][4205] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.326 [INFO][4205] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.330 [INFO][4205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.330 [INFO][4205] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.333 [INFO][4205] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89 Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.339 [INFO][4205] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.347 [INFO][4205] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.67/26] block=192.168.127.64/26 handle="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.347 [INFO][4205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.67/26] handle="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" host="172-234-26-141" Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.347 [INFO][4205] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:25:46.392323 containerd[1464]: 2025-11-01 00:25:46.347 [INFO][4205] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.67/26] IPv6=[] ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" HandleID="k8s-pod-network.a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.392958 containerd[1464]: 2025-11-01 00:25:46.352 [INFO][4189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5b66f29d-a0c7-459a-a622-8bd163fa7e38", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"coredns-66bc5c9577-k8blm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califda2a979fa0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:46.392958 containerd[1464]: 2025-11-01 00:25:46.353 [INFO][4189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.67/32] ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.392958 containerd[1464]: 2025-11-01 00:25:46.353 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califda2a979fa0 ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.392958 
containerd[1464]: 2025-11-01 00:25:46.364 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.392958 containerd[1464]: 2025-11-01 00:25:46.365 [INFO][4189] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5b66f29d-a0c7-459a-a622-8bd163fa7e38", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89", Pod:"coredns-66bc5c9577-k8blm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califda2a979fa0", MAC:"fe:9b:d0:03:98:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:46.392958 containerd[1464]: 2025-11-01 00:25:46.385 [INFO][4189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89" Namespace="kube-system" Pod="coredns-66bc5c9577-k8blm" WorkloadEndpoint="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:25:46.401707 containerd[1464]: time="2025-11-01T00:25:46.401563788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9httv,Uid:b4745334-2dc8-452d-b994-9002bb77af9f,Namespace:kube-system,Attempt:1,} returns sandbox id \"86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e\"" Nov 1 00:25:46.402701 
kubelet[2546]: E1101 00:25:46.402672 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:46.410524 containerd[1464]: time="2025-11-01T00:25:46.410458413Z" level=info msg="CreateContainer within sandbox \"86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:25:46.429241 containerd[1464]: time="2025-11-01T00:25:46.429201252Z" level=info msg="CreateContainer within sandbox \"86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eaec1f1a4f50f8a938e689af77777ddf7e2ccdfc58dd5ac88acec0391ce34c83\"" Nov 1 00:25:46.430494 containerd[1464]: time="2025-11-01T00:25:46.430442114Z" level=info msg="StartContainer for \"eaec1f1a4f50f8a938e689af77777ddf7e2ccdfc58dd5ac88acec0391ce34c83\"" Nov 1 00:25:46.440009 containerd[1464]: time="2025-11-01T00:25:46.439935634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:46.441835 containerd[1464]: time="2025-11-01T00:25:46.440785153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:46.442067 containerd[1464]: time="2025-11-01T00:25:46.441904983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:46.442067 containerd[1464]: time="2025-11-01T00:25:46.441992014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:46.466751 systemd[1]: Started cri-containerd-a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89.scope - libcontainer container a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89. Nov 1 00:25:46.500294 systemd[1]: Started cri-containerd-eaec1f1a4f50f8a938e689af77777ddf7e2ccdfc58dd5ac88acec0391ce34c83.scope - libcontainer container eaec1f1a4f50f8a938e689af77777ddf7e2ccdfc58dd5ac88acec0391ce34c83. 
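[Note] The recurring dns.go:154 "Nameserver limits exceeded" warnings come from kubelet, not CoreDNS: the node's resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS), so kubelet drops the extras and applies only 172.232.0.16, 172.232.0.21 and 172.232.0.13. A rough sketch of that trimming, as an illustration rather than kubelet's actual code:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const maxNameservers = 3 // glibc MAXNS; kubelet warns beyond this
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if fields := strings.Fields(sc.Text()); len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("nameserver limits exceeded, omitting %d\n", len(servers)-maxNameservers)
    		servers = servers[:maxNameservers]
    	}
    	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }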
Nov 1 00:25:46.558363 containerd[1464]: time="2025-11-01T00:25:46.557793989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k8blm,Uid:5b66f29d-a0c7-459a-a622-8bd163fa7e38,Namespace:kube-system,Attempt:1,} returns sandbox id \"a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89\"" Nov 1 00:25:46.559398 kubelet[2546]: E1101 00:25:46.559333 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:46.561742 containerd[1464]: time="2025-11-01T00:25:46.561704907Z" level=info msg="StartContainer for \"eaec1f1a4f50f8a938e689af77777ddf7e2ccdfc58dd5ac88acec0391ce34c83\" returns successfully" Nov 1 00:25:46.567688 containerd[1464]: time="2025-11-01T00:25:46.567654023Z" level=info msg="CreateContainer within sandbox \"a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:25:46.582685 containerd[1464]: time="2025-11-01T00:25:46.582213062Z" level=info msg="CreateContainer within sandbox \"a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bc10a09d573540ccffff0b88c9599dd2d53ef40f318a4832d44789c9e3027e9\"" Nov 1 00:25:46.584364 containerd[1464]: time="2025-11-01T00:25:46.584265782Z" level=info msg="StartContainer for \"9bc10a09d573540ccffff0b88c9599dd2d53ef40f318a4832d44789c9e3027e9\"" Nov 1 00:25:46.637344 systemd[1]: Started cri-containerd-9bc10a09d573540ccffff0b88c9599dd2d53ef40f318a4832d44789c9e3027e9.scope - libcontainer container 9bc10a09d573540ccffff0b88c9599dd2d53ef40f318a4832d44789c9e3027e9. Nov 1 00:25:46.685200 containerd[1464]: time="2025-11-01T00:25:46.684502708Z" level=info msg="StartContainer for \"9bc10a09d573540ccffff0b88c9599dd2d53ef40f318a4832d44789c9e3027e9\" returns successfully" Nov 1 00:25:46.919088 containerd[1464]: time="2025-11-01T00:25:46.917812765Z" level=info msg="StopPodSandbox for \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\"" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:46.973 [INFO][4410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:46.974 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" iface="eth0" netns="/var/run/netns/cni-b0a4630d-bc55-d6f7-ca00-1582d7ff5671" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:46.974 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" iface="eth0" netns="/var/run/netns/cni-b0a4630d-bc55-d6f7-ca00-1582d7ff5671" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:46.976 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" iface="eth0" netns="/var/run/netns/cni-b0a4630d-bc55-d6f7-ca00-1582d7ff5671" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:46.977 [INFO][4410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:46.978 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:47.015 [INFO][4417] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:47.015 [INFO][4417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:47.015 [INFO][4417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:47.021 [WARNING][4417] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:47.021 [INFO][4417] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:47.022 [INFO][4417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:47.027703 containerd[1464]: 2025-11-01 00:25:47.025 [INFO][4410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:25:47.028559 containerd[1464]: time="2025-11-01T00:25:47.027894482Z" level=info msg="TearDown network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\" successfully" Nov 1 00:25:47.028559 containerd[1464]: time="2025-11-01T00:25:47.027958782Z" level=info msg="StopPodSandbox for \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\" returns successfully" Nov 1 00:25:47.030414 containerd[1464]: time="2025-11-01T00:25:47.030076622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t79g9,Uid:a394b5b4-84f6-43c3-bf21-09838f083553,Namespace:calico-system,Attempt:1,}" Nov 1 00:25:47.065189 systemd[1]: run-netns-cni\x2db0a4630d\x2dbc55\x2dd6f7\x2dca00\x2d1582d7ff5671.mount: Deactivated successfully. 
Nov 1 00:25:47.160120 systemd-networkd[1380]: cali8b150de3ec9: Link UP Nov 1 00:25:47.160419 systemd-networkd[1380]: cali8b150de3ec9: Gained carrier Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.087 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0 goldmane-7c778bb748- calico-system a394b5b4-84f6-43c3-bf21-09838f083553 988 0 2025-11-01 00:25:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-26-141 goldmane-7c778bb748-t79g9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8b150de3ec9 [] [] }} ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.087 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.120 [INFO][4436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" HandleID="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.120 [INFO][4436] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" HandleID="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c3640), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-26-141", "pod":"goldmane-7c778bb748-t79g9", "timestamp":"2025-11-01 00:25:47.120234147 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.120 [INFO][4436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.120 [INFO][4436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.120 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.128 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.133 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.139 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.141 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.143 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.143 [INFO][4436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.144 [INFO][4436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8 Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.147 [INFO][4436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.153 [INFO][4436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.68/26] block=192.168.127.64/26 handle="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.153 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.68/26] handle="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" host="172-234-26-141" Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.153 [INFO][4436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
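[Note] Once a pull fails with ErrImagePull, kubelet moves the container to ImagePullBackOff, which is why the whisker retries earlier in this log are spaced out rather than immediate; the goldmane pull below follows the same path. Upstream kubelet's image-pull backoff defaults to exponential growth from a 10-second base up to a 5-minute cap (values from upstream defaults, treated here as an assumption for this build). A sketch of that schedule:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Exponential image-pull backoff: 10s base, doubling per failure,
    	// capped at 5 minutes (assumed kubelet defaults).
    	backoff := 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("retry %d after %v\n", attempt, backoff)
    		backoff *= 2
    		if backoff > maxDelay {
    			backoff = maxDelay
    		}
    	}
    }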
Nov 1 00:25:47.181927 containerd[1464]: 2025-11-01 00:25:47.153 [INFO][4436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.68/26] IPv6=[] ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" HandleID="k8s-pod-network.78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.182566 containerd[1464]: 2025-11-01 00:25:47.156 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a394b5b4-84f6-43c3-bf21-09838f083553", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"goldmane-7c778bb748-t79g9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b150de3ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:47.182566 containerd[1464]: 2025-11-01 00:25:47.156 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.68/32] ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.182566 containerd[1464]: 2025-11-01 00:25:47.156 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b150de3ec9 ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.182566 containerd[1464]: 2025-11-01 00:25:47.162 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.182566 containerd[1464]: 2025-11-01 00:25:47.164 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" 
WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a394b5b4-84f6-43c3-bf21-09838f083553", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8", Pod:"goldmane-7c778bb748-t79g9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b150de3ec9", MAC:"36:3f:74:be:4b:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:47.182566 containerd[1464]: 2025-11-01 00:25:47.178 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8" Namespace="calico-system" Pod="goldmane-7c778bb748-t79g9" WorkloadEndpoint="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:25:47.207912 kubelet[2546]: E1101 00:25:47.207244 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:47.213623 containerd[1464]: time="2025-11-01T00:25:47.210689664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:47.213623 containerd[1464]: time="2025-11-01T00:25:47.210739674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:47.213623 containerd[1464]: time="2025-11-01T00:25:47.210756875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:47.213623 containerd[1464]: time="2025-11-01T00:25:47.211018407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:47.219360 kubelet[2546]: E1101 00:25:47.218755 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:47.249086 kubelet[2546]: I1101 00:25:47.248795 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9httv" podStartSLOduration=35.248780299 podStartE2EDuration="35.248780299s" podCreationTimestamp="2025-11-01 00:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:47.236109694 +0000 UTC m=+41.427047809" watchObservedRunningTime="2025-11-01 00:25:47.248780299 +0000 UTC m=+41.439718384" Nov 1 00:25:47.271334 systemd[1]: Started cri-containerd-78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8.scope - libcontainer container 78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8. Nov 1 00:25:47.285052 kubelet[2546]: I1101 00:25:47.282385 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k8blm" podStartSLOduration=35.282369932 podStartE2EDuration="35.282369932s" podCreationTimestamp="2025-11-01 00:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:47.254061276 +0000 UTC m=+41.444999371" watchObservedRunningTime="2025-11-01 00:25:47.282369932 +0000 UTC m=+41.473308017" Nov 1 00:25:47.356053 containerd[1464]: time="2025-11-01T00:25:47.355975257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t79g9,Uid:a394b5b4-84f6-43c3-bf21-09838f083553,Namespace:calico-system,Attempt:1,} returns sandbox id \"78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8\"" Nov 1 00:25:47.359607 containerd[1464]: time="2025-11-01T00:25:47.359366778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:25:47.507902 containerd[1464]: time="2025-11-01T00:25:47.507739420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:47.508749 containerd[1464]: time="2025-11-01T00:25:47.508685138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:25:47.508988 containerd[1464]: time="2025-11-01T00:25:47.508715938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:25:47.509260 kubelet[2546]: E1101 00:25:47.509200 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:25:47.509337 kubelet[2546]: E1101 00:25:47.509302 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:25:47.509473 kubelet[2546]: E1101 00:25:47.509440 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t79g9_calico-system(a394b5b4-84f6-43c3-bf21-09838f083553): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:47.509874 kubelet[2546]: E1101 00:25:47.509531 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:25:47.562382 systemd-networkd[1380]: califda2a979fa0: Gained IPv6LL Nov 1 00:25:47.818816 systemd-networkd[1380]: cali31effdd235b: Gained IPv6LL Nov 1 00:25:47.919629 containerd[1464]: time="2025-11-01T00:25:47.919362290Z" level=info msg="StopPodSandbox for \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\"" Nov 1 00:25:47.920005 containerd[1464]: time="2025-11-01T00:25:47.919979505Z" level=info msg="StopPodSandbox for \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\"" Nov 1 00:25:47.922877 containerd[1464]: time="2025-11-01T00:25:47.922575789Z" level=info msg="StopPodSandbox for \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\"" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.011 [INFO][4520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.012 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" iface="eth0" netns="/var/run/netns/cni-62d1fe6b-f900-df20-d7d5-a938bfa54fc9" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.012 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" iface="eth0" netns="/var/run/netns/cni-62d1fe6b-f900-df20-d7d5-a938bfa54fc9" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.012 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" iface="eth0" netns="/var/run/netns/cni-62d1fe6b-f900-df20-d7d5-a938bfa54fc9" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.013 [INFO][4520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.013 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.065 [INFO][4542] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.066 [INFO][4542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.066 [INFO][4542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.073 [WARNING][4542] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.073 [INFO][4542] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.074 [INFO][4542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:48.078958 containerd[1464]: 2025-11-01 00:25:48.077 [INFO][4520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:25:48.083739 containerd[1464]: time="2025-11-01T00:25:48.083688087Z" level=info msg="TearDown network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\" successfully" Nov 1 00:25:48.083739 containerd[1464]: time="2025-11-01T00:25:48.083721657Z" level=info msg="StopPodSandbox for \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\" returns successfully" Nov 1 00:25:48.086533 systemd[1]: run-netns-cni\x2d62d1fe6b\x2df900\x2ddf20\x2dd7d5\x2da938bfa54fc9.mount: Deactivated successfully. Nov 1 00:25:48.090789 containerd[1464]: time="2025-11-01T00:25:48.089817009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-r82hw,Uid:7decf862-2dea-422d-a655-b341baeeaa59,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.016 [INFO][4518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.017 [INFO][4518] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" iface="eth0" netns="/var/run/netns/cni-47cb575d-5a1e-5df7-a26e-bb2df5550a8d" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.018 [INFO][4518] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" iface="eth0" netns="/var/run/netns/cni-47cb575d-5a1e-5df7-a26e-bb2df5550a8d" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.018 [INFO][4518] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" iface="eth0" netns="/var/run/netns/cni-47cb575d-5a1e-5df7-a26e-bb2df5550a8d" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.018 [INFO][4518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.018 [INFO][4518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.067 [INFO][4547] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.069 [INFO][4547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.074 [INFO][4547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.083 [WARNING][4547] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.083 [INFO][4547] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.085 [INFO][4547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:48.094183 containerd[1464]: 2025-11-01 00:25:48.091 [INFO][4518] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:25:48.098194 containerd[1464]: time="2025-11-01T00:25:48.098125740Z" level=info msg="TearDown network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\" successfully" Nov 1 00:25:48.098194 containerd[1464]: time="2025-11-01T00:25:48.098150321Z" level=info msg="StopPodSandbox for \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\" returns successfully" Nov 1 00:25:48.101933 containerd[1464]: time="2025-11-01T00:25:48.101871903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-4s2pm,Uid:6f8b1313-5d3a-421c-a1c3-861bc7b1da27,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:25:48.102060 systemd[1]: run-netns-cni\x2d47cb575d\x2d5a1e\x2d5df7\x2da26e\x2dbb2df5550a8d.mount: Deactivated successfully. Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.025 [INFO][4528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.026 [INFO][4528] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" iface="eth0" netns="/var/run/netns/cni-1ea957d4-ea48-3059-0c08-71a22b382c2f" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.027 [INFO][4528] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" iface="eth0" netns="/var/run/netns/cni-1ea957d4-ea48-3059-0c08-71a22b382c2f" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.027 [INFO][4528] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" iface="eth0" netns="/var/run/netns/cni-1ea957d4-ea48-3059-0c08-71a22b382c2f" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.028 [INFO][4528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.028 [INFO][4528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.070 [INFO][4552] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.071 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.087 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.097 [WARNING][4552] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.097 [INFO][4552] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.099 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:48.106340 containerd[1464]: 2025-11-01 00:25:48.103 [INFO][4528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:25:48.107218 containerd[1464]: time="2025-11-01T00:25:48.107044657Z" level=info msg="TearDown network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\" successfully" Nov 1 00:25:48.107218 containerd[1464]: time="2025-11-01T00:25:48.107068157Z" level=info msg="StopPodSandbox for \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\" returns successfully" Nov 1 00:25:48.112210 containerd[1464]: time="2025-11-01T00:25:48.112165731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79458bd765-tc96j,Uid:1449e27d-cfd3-4b57-8ca8-d99ff2c00988,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:25:48.112948 systemd[1]: run-netns-cni\x2d1ea957d4\x2dea48\x2d3059\x2d0c08\x2d71a22b382c2f.mount: Deactivated successfully. Nov 1 00:25:48.224957 kubelet[2546]: E1101 00:25:48.224297 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:48.224957 kubelet[2546]: E1101 00:25:48.224369 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:48.228478 kubelet[2546]: E1101 00:25:48.226734 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:25:48.333079 systemd-networkd[1380]: cali8b150de3ec9: Gained IPv6LL Nov 1 00:25:48.337270 systemd-networkd[1380]: cali42090c753a3: Link UP Nov 1 00:25:48.337673 systemd-networkd[1380]: cali42090c753a3: Gained carrier Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.194 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0 calico-apiserver-57df9d5c69- calico-apiserver 7decf862-2dea-422d-a655-b341baeeaa59 1015 0 2025-11-01 00:25:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:57df9d5c69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-26-141 calico-apiserver-57df9d5c69-r82hw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali42090c753a3 [] [] }} ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.194 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.272 [INFO][4603] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" HandleID="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.273 [INFO][4603] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" HandleID="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-26-141", "pod":"calico-apiserver-57df9d5c69-r82hw", "timestamp":"2025-11-01 00:25:48.272871558 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.273 [INFO][4603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.273 [INFO][4603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.273 [INFO][4603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.287 [INFO][4603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.296 [INFO][4603] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.303 [INFO][4603] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.306 [INFO][4603] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.308 [INFO][4603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.308 [INFO][4603] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.309 [INFO][4603] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80 Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.314 [INFO][4603] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.319 [INFO][4603] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.69/26] block=192.168.127.64/26 handle="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.319 [INFO][4603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.69/26] handle="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" host="172-234-26-141" Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.319 [INFO][4603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:25:48.358530 containerd[1464]: 2025-11-01 00:25:48.319 [INFO][4603] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.69/26] IPv6=[] ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" HandleID="k8s-pod-network.268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.359287 containerd[1464]: 2025-11-01 00:25:48.325 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"7decf862-2dea-422d-a655-b341baeeaa59", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"calico-apiserver-57df9d5c69-r82hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42090c753a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:48.359287 containerd[1464]: 2025-11-01 00:25:48.325 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.69/32] ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.359287 containerd[1464]: 2025-11-01 00:25:48.327 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42090c753a3 ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.359287 containerd[1464]: 2025-11-01 00:25:48.339 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.359287 containerd[1464]: 2025-11-01 00:25:48.340 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"7decf862-2dea-422d-a655-b341baeeaa59", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80", Pod:"calico-apiserver-57df9d5c69-r82hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42090c753a3", MAC:"e6:81:61:bc:ac:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:48.359287 containerd[1464]: 2025-11-01 00:25:48.354 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-r82hw" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:25:48.411682 containerd[1464]: time="2025-11-01T00:25:48.411421905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:48.411916 containerd[1464]: time="2025-11-01T00:25:48.411819829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:48.412142 containerd[1464]: time="2025-11-01T00:25:48.412111871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:48.412740 containerd[1464]: time="2025-11-01T00:25:48.412556815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:48.460382 systemd-networkd[1380]: cali78953af04fb: Link UP Nov 1 00:25:48.461974 systemd-networkd[1380]: cali78953af04fb: Gained carrier Nov 1 00:25:48.466215 systemd[1]: Started cri-containerd-268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80.scope - libcontainer container 268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80. 
Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.191 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0 calico-apiserver-57df9d5c69- calico-apiserver 6f8b1313-5d3a-421c-a1c3-861bc7b1da27 1016 0 2025-11-01 00:25:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57df9d5c69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-26-141 calico-apiserver-57df9d5c69-4s2pm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali78953af04fb [] [] }} ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.191 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.275 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" HandleID="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.275 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" HandleID="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-26-141", "pod":"calico-apiserver-57df9d5c69-4s2pm", "timestamp":"2025-11-01 00:25:48.27547881 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.275 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.320 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.320 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.389 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.396 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.408 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.413 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.417 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.417 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.421 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.428 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.442 [INFO][4600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.70/26] block=192.168.127.64/26 handle="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.442 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.70/26] handle="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" host="172-234-26-141" Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.442 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:25:48.482927 containerd[1464]: 2025-11-01 00:25:48.442 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.70/26] IPv6=[] ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" HandleID="k8s-pod-network.99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.484202 containerd[1464]: 2025-11-01 00:25:48.448 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f8b1313-5d3a-421c-a1c3-861bc7b1da27", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"calico-apiserver-57df9d5c69-4s2pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78953af04fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:48.484202 containerd[1464]: 2025-11-01 00:25:48.448 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.70/32] ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.484202 containerd[1464]: 2025-11-01 00:25:48.448 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78953af04fb ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.484202 containerd[1464]: 2025-11-01 00:25:48.464 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.484202 containerd[1464]: 2025-11-01 00:25:48.465 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f8b1313-5d3a-421c-a1c3-861bc7b1da27", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c", Pod:"calico-apiserver-57df9d5c69-4s2pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78953af04fb", MAC:"d6:03:7c:a4:12:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:48.484202 containerd[1464]: 2025-11-01 00:25:48.479 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c" Namespace="calico-apiserver" Pod="calico-apiserver-57df9d5c69-4s2pm" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:25:48.535133 containerd[1464]: time="2025-11-01T00:25:48.533691793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:48.535133 containerd[1464]: time="2025-11-01T00:25:48.534704431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:48.535612 containerd[1464]: time="2025-11-01T00:25:48.535261036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:48.535612 containerd[1464]: time="2025-11-01T00:25:48.535400947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:48.546591 systemd-networkd[1380]: cali78950839dc6: Link UP Nov 1 00:25:48.551478 systemd-networkd[1380]: cali78950839dc6: Gained carrier Nov 1 00:25:48.593721 systemd[1]: Started cri-containerd-99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c.scope - libcontainer container 99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c. 
Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.201 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0 calico-apiserver-79458bd765- calico-apiserver 1449e27d-cfd3-4b57-8ca8-d99ff2c00988 1017 0 2025-11-01 00:25:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79458bd765 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-26-141 calico-apiserver-79458bd765-tc96j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali78950839dc6 [] [] }} ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.202 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.279 [INFO][4610] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" HandleID="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.279 [INFO][4610] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" HandleID="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-26-141", "pod":"calico-apiserver-79458bd765-tc96j", "timestamp":"2025-11-01 00:25:48.279802927 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.280 [INFO][4610] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.442 [INFO][4610] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.443 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.487 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.498 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.505 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.507 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.511 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.511 [INFO][4610] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.513 [INFO][4610] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.525 [INFO][4610] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.532 [INFO][4610] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.71/26] block=192.168.127.64/26 handle="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.532 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.71/26] handle="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" host="172-234-26-141" Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.532 [INFO][4610] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:25:48.597379 containerd[1464]: 2025-11-01 00:25:48.532 [INFO][4610] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.71/26] IPv6=[] ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" HandleID="k8s-pod-network.b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.598051 containerd[1464]: 2025-11-01 00:25:48.541 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0", GenerateName:"calico-apiserver-79458bd765-", Namespace:"calico-apiserver", SelfLink:"", UID:"1449e27d-cfd3-4b57-8ca8-d99ff2c00988", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79458bd765", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"calico-apiserver-79458bd765-tc96j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78950839dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:48.598051 containerd[1464]: 2025-11-01 00:25:48.542 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.71/32] ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.598051 containerd[1464]: 2025-11-01 00:25:48.542 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78950839dc6 ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.598051 containerd[1464]: 2025-11-01 00:25:48.555 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.598051 containerd[1464]: 2025-11-01 00:25:48.557 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0", GenerateName:"calico-apiserver-79458bd765-", Namespace:"calico-apiserver", SelfLink:"", UID:"1449e27d-cfd3-4b57-8ca8-d99ff2c00988", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79458bd765", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f", Pod:"calico-apiserver-79458bd765-tc96j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78950839dc6", MAC:"ba:ff:d1:fe:a1:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:48.598051 containerd[1464]: 2025-11-01 00:25:48.589 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f" Namespace="calico-apiserver" Pod="calico-apiserver-79458bd765-tc96j" WorkloadEndpoint="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:25:48.643538 containerd[1464]: time="2025-11-01T00:25:48.643181641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:48.643538 containerd[1464]: time="2025-11-01T00:25:48.643239121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:48.643538 containerd[1464]: time="2025-11-01T00:25:48.643253861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:48.643538 containerd[1464]: time="2025-11-01T00:25:48.643457543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:48.659289 containerd[1464]: time="2025-11-01T00:25:48.659260739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-r82hw,Uid:7decf862-2dea-422d-a655-b341baeeaa59,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80\"" Nov 1 00:25:48.664611 containerd[1464]: time="2025-11-01T00:25:48.664469073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:25:48.695193 systemd[1]: Started cri-containerd-b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f.scope - libcontainer container b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f. Nov 1 00:25:48.718478 containerd[1464]: time="2025-11-01T00:25:48.718445476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57df9d5c69-4s2pm,Uid:6f8b1313-5d3a-421c-a1c3-861bc7b1da27,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c\"" Nov 1 00:25:48.764093 containerd[1464]: time="2025-11-01T00:25:48.763973876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79458bd765-tc96j,Uid:1449e27d-cfd3-4b57-8ca8-d99ff2c00988,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f\"" Nov 1 00:25:48.807501 containerd[1464]: time="2025-11-01T00:25:48.807461128Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:48.808458 containerd[1464]: time="2025-11-01T00:25:48.808353376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:25:48.808611 kubelet[2546]: E1101 00:25:48.808583 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:48.808716 kubelet[2546]: E1101 00:25:48.808617 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:48.808806 kubelet[2546]: E1101 00:25:48.808765 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-r82hw_calico-apiserver(7decf862-2dea-422d-a655-b341baeeaa59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:48.808956 containerd[1464]: time="2025-11-01T00:25:48.808396177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 
00:25:48.809368 containerd[1464]: time="2025-11-01T00:25:48.809062122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:25:48.809548 kubelet[2546]: E1101 00:25:48.809483 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:25:48.920723 containerd[1464]: time="2025-11-01T00:25:48.920493967Z" level=info msg="StopPodSandbox for \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\"" Nov 1 00:25:48.964405 containerd[1464]: time="2025-11-01T00:25:48.964196301Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:48.965246 containerd[1464]: time="2025-11-01T00:25:48.965217290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:25:48.965519 containerd[1464]: time="2025-11-01T00:25:48.965327861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:25:48.966114 kubelet[2546]: E1101 00:25:48.965689 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:48.966114 kubelet[2546]: E1101 00:25:48.965738 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:48.966114 kubelet[2546]: E1101 00:25:48.965926 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-4s2pm_calico-apiserver(6f8b1313-5d3a-421c-a1c3-861bc7b1da27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:48.966114 kubelet[2546]: E1101 00:25:48.965965 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:25:48.966947 containerd[1464]: 
time="2025-11-01T00:25:48.966555102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:48.999 [INFO][4785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.000 [INFO][4785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" iface="eth0" netns="/var/run/netns/cni-e0105916-7d8f-8c25-3289-1e02f90bf44d" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.000 [INFO][4785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" iface="eth0" netns="/var/run/netns/cni-e0105916-7d8f-8c25-3289-1e02f90bf44d" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.000 [INFO][4785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" iface="eth0" netns="/var/run/netns/cni-e0105916-7d8f-8c25-3289-1e02f90bf44d" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.000 [INFO][4785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.001 [INFO][4785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.039 [INFO][4792] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.039 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.040 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.045 [WARNING][4792] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.045 [INFO][4792] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.046 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:49.052668 containerd[1464]: 2025-11-01 00:25:49.049 [INFO][4785] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:25:49.053356 containerd[1464]: time="2025-11-01T00:25:49.053238241Z" level=info msg="TearDown network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\" successfully" Nov 1 00:25:49.053356 containerd[1464]: time="2025-11-01T00:25:49.053264341Z" level=info msg="StopPodSandbox for \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\" returns successfully" Nov 1 00:25:49.055707 containerd[1464]: time="2025-11-01T00:25:49.055648091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8hdgb,Uid:12cca151-8712-4604-9035-7f2e07caab0c,Namespace:calico-system,Attempt:1,}" Nov 1 00:25:49.088877 systemd[1]: run-netns-cni\x2de0105916\x2d7d8f\x2d8c25\x2d3289\x2d1e02f90bf44d.mount: Deactivated successfully. Nov 1 00:25:49.117818 containerd[1464]: time="2025-11-01T00:25:49.117778066Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:49.119710 containerd[1464]: time="2025-11-01T00:25:49.119623391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:25:49.120015 containerd[1464]: time="2025-11-01T00:25:49.119769912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:25:49.120409 kubelet[2546]: E1101 00:25:49.120353 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:49.120474 kubelet[2546]: E1101 00:25:49.120423 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:49.121065 kubelet[2546]: E1101 00:25:49.120553 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79458bd765-tc96j_calico-apiserver(1449e27d-cfd3-4b57-8ca8-d99ff2c00988): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:49.121065 kubelet[2546]: E1101 00:25:49.120590 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:25:49.184741 systemd-networkd[1380]: 
cali65e44865f95: Link UP Nov 1 00:25:49.188493 systemd-networkd[1380]: cali65e44865f95: Gained carrier Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.107 [INFO][4799] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-csi--node--driver--8hdgb-eth0 csi-node-driver- calico-system 12cca151-8712-4604-9035-7f2e07caab0c 1045 0 2025-11-01 00:25:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-26-141 csi-node-driver-8hdgb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali65e44865f95 [] [] }} ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.107 [INFO][4799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.139 [INFO][4811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" HandleID="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.140 [INFO][4811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" HandleID="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-26-141", "pod":"csi-node-driver-8hdgb", "timestamp":"2025-11-01 00:25:49.139979146 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.140 [INFO][4811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.140 [INFO][4811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.140 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.147 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.154 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.159 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.161 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.163 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.163 [INFO][4811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.164 [INFO][4811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38 Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.169 [INFO][4811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.176 [INFO][4811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.72/26] block=192.168.127.64/26 handle="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.176 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.72/26] handle="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" host="172-234-26-141" Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.176 [INFO][4811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
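The IPAM sequence above shows Calico loading the host's affine block 192.168.127.64/26 and claiming 192.168.127.72 from it for csi-node-driver-8hdgb. A /26 block spans 64 addresses (.64 through .127), so the claim is consistent; a minimal check with Python's stdlib ipaddress module (illustrative only, not part of the log):

# Verify that the address Calico claimed falls inside the host's affine block.
import ipaddress

block = ipaddress.ip_network("192.168.127.64/26")   # block loaded by ipam/ipam.go above
claimed = ipaddress.ip_address("192.168.127.72")    # address claimed for csi-node-driver-8hdgb

print(block.num_addresses)   # 64 addresses: 192.168.127.64 .. 192.168.127.127
print(claimed in block)      # True: the claimed IP lies in the affine block
print(ipaddress.ip_network("192.168.127.72/32").subnet_of(block))  # True: the /32 endpoint route fits too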
Nov 1 00:25:49.202106 containerd[1464]: 2025-11-01 00:25:49.176 [INFO][4811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.72/26] IPv6=[] ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" HandleID="k8s-pod-network.1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.204394 containerd[1464]: 2025-11-01 00:25:49.179 [INFO][4799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-csi--node--driver--8hdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12cca151-8712-4604-9035-7f2e07caab0c", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"csi-node-driver-8hdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65e44865f95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:49.204394 containerd[1464]: 2025-11-01 00:25:49.179 [INFO][4799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.72/32] ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.204394 containerd[1464]: 2025-11-01 00:25:49.179 [INFO][4799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65e44865f95 ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.204394 containerd[1464]: 2025-11-01 00:25:49.183 [INFO][4799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.204394 containerd[1464]: 2025-11-01 00:25:49.184 [INFO][4799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" 
Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-csi--node--driver--8hdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12cca151-8712-4604-9035-7f2e07caab0c", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38", Pod:"csi-node-driver-8hdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65e44865f95", MAC:"e6:66:5b:0b:95:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:49.204394 containerd[1464]: 2025-11-01 00:25:49.195 [INFO][4799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38" Namespace="calico-system" Pod="csi-node-driver-8hdgb" WorkloadEndpoint="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:25:49.231964 kubelet[2546]: E1101 00:25:49.231800 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:25:49.243390 kubelet[2546]: E1101 00:25:49.242211 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:25:49.243486 containerd[1464]: time="2025-11-01T00:25:49.242214987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:49.243486 containerd[1464]: time="2025-11-01T00:25:49.242319378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:49.243486 containerd[1464]: time="2025-11-01T00:25:49.242338278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:49.250127 containerd[1464]: time="2025-11-01T00:25:49.243441267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:49.250195 kubelet[2546]: E1101 00:25:49.248570 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:49.250195 kubelet[2546]: E1101 00:25:49.248941 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:49.250605 kubelet[2546]: E1101 00:25:49.250582 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:25:49.258619 kubelet[2546]: E1101 00:25:49.256466 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:25:49.300409 systemd[1]: Started cri-containerd-1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38.scope - libcontainer container 1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38. 
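The dns.go warnings above report that the applied nameserver line was truncated to 172.232.0.16 172.232.0.21 172.232.0.13: the kubelet applies at most three resolvers, matching glibc's limit, and drops the rest. A sketch that mimics that truncation; the node's real resolv.conf is not in this log, so the fourth entry below is a placeholder assumption:

# Mimic the kubelet's resolver-limit truncation seen in the dns.go warnings above.
# The node's actual resolv.conf is not in the log; 203.0.113.53 is a placeholder.
MAX_NAMESERVERS = 3  # kubelet (like glibc) applies at most three nameservers

resolv_conf = """\
nameserver 172.232.0.16
nameserver 172.232.0.21
nameserver 172.232.0.13
nameserver 203.0.113.53
"""

nameservers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]
applied = nameservers[:MAX_NAMESERVERS]
print("applied nameserver line is:", " ".join(applied))
# -> applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13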
Nov 1 00:25:49.344378 containerd[1464]: time="2025-11-01T00:25:49.344301757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8hdgb,Uid:12cca151-8712-4604-9035-7f2e07caab0c,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38\"" Nov 1 00:25:49.346508 containerd[1464]: time="2025-11-01T00:25:49.346471484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:25:49.488233 containerd[1464]: time="2025-11-01T00:25:49.486713584Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:49.488401 containerd[1464]: time="2025-11-01T00:25:49.488178076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:25:49.488401 containerd[1464]: time="2025-11-01T00:25:49.488230186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:25:49.488650 kubelet[2546]: E1101 00:25:49.488590 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:25:49.488650 kubelet[2546]: E1101 00:25:49.488641 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:25:49.488906 kubelet[2546]: E1101 00:25:49.488711 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:49.490243 containerd[1464]: time="2025-11-01T00:25:49.490179252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:25:49.640690 containerd[1464]: time="2025-11-01T00:25:49.640643975Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:49.641774 containerd[1464]: time="2025-11-01T00:25:49.641735934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:25:49.642002 containerd[1464]: time="2025-11-01T00:25:49.641818745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:25:49.642076 kubelet[2546]: E1101 00:25:49.641958 2546 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:25:49.642076 kubelet[2546]: E1101 00:25:49.642004 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:25:49.642173 kubelet[2546]: E1101 00:25:49.642104 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:49.642278 kubelet[2546]: E1101 00:25:49.642161 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:49.920528 containerd[1464]: time="2025-11-01T00:25:49.919751733Z" level=info msg="StopPodSandbox for \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\"" Nov 1 00:25:49.994271 systemd-networkd[1380]: cali42090c753a3: Gained IPv6LL Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:49.973 [INFO][4877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:49.973 [INFO][4877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" iface="eth0" netns="/var/run/netns/cni-a5b8a0ce-a511-d9c6-c3a4-d0a7032e4865" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:49.974 [INFO][4877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" iface="eth0" netns="/var/run/netns/cni-a5b8a0ce-a511-d9c6-c3a4-d0a7032e4865" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:49.975 [INFO][4877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" iface="eth0" netns="/var/run/netns/cni-a5b8a0ce-a511-d9c6-c3a4-d0a7032e4865" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:49.975 [INFO][4877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:49.975 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:50.001 [INFO][4885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:50.001 [INFO][4885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:50.002 [INFO][4885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:50.008 [WARNING][4885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:50.008 [INFO][4885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:50.010 [INFO][4885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:25:50.015917 containerd[1464]: 2025-11-01 00:25:50.013 [INFO][4877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:25:50.020123 containerd[1464]: time="2025-11-01T00:25:50.018108705Z" level=info msg="TearDown network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\" successfully" Nov 1 00:25:50.020123 containerd[1464]: time="2025-11-01T00:25:50.018160446Z" level=info msg="StopPodSandbox for \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\" returns successfully" Nov 1 00:25:50.019426 systemd[1]: run-netns-cni\x2da5b8a0ce\x2da511\x2dd9c6\x2dc3a4\x2dd0a7032e4865.mount: Deactivated successfully. 
Nov 1 00:25:50.022815 containerd[1464]: time="2025-11-01T00:25:50.022783321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5879cb96-dlw8c,Uid:c30f1fcc-7cd9-400f-884f-bd1e3091973a,Namespace:calico-system,Attempt:1,}" Nov 1 00:25:50.179783 systemd-networkd[1380]: calice135ddd9fd: Link UP Nov 1 00:25:50.184132 systemd-networkd[1380]: calice135ddd9fd: Gained carrier Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.075 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0 calico-kube-controllers-f5879cb96- calico-system c30f1fcc-7cd9-400f-884f-bd1e3091973a 1080 0 2025-11-01 00:25:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f5879cb96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-26-141 calico-kube-controllers-f5879cb96-dlw8c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice135ddd9fd [] [] }} ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.075 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.115 [INFO][4903] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" HandleID="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.115 [INFO][4903] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" HandleID="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7280), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-26-141", "pod":"calico-kube-controllers-f5879cb96-dlw8c", "timestamp":"2025-11-01 00:25:50.115135644 +0000 UTC"}, Hostname:"172-234-26-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.115 [INFO][4903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.115 [INFO][4903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.115 [INFO][4903] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-26-141' Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.123 [INFO][4903] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.129 [INFO][4903] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.134 [INFO][4903] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.137 [INFO][4903] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.140 [INFO][4903] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.140 [INFO][4903] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.142 [INFO][4903] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.150 [INFO][4903] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.167 [INFO][4903] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.73/26] block=192.168.127.64/26 handle="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.167 [INFO][4903] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.73/26] handle="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" host="172-234-26-141" Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.167 [INFO][4903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:25:50.214243 containerd[1464]: 2025-11-01 00:25:50.167 [INFO][4903] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.73/26] IPv6=[] ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" HandleID="k8s-pod-network.de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.218293 containerd[1464]: 2025-11-01 00:25:50.171 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0", GenerateName:"calico-kube-controllers-f5879cb96-", Namespace:"calico-system", SelfLink:"", UID:"c30f1fcc-7cd9-400f-884f-bd1e3091973a", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5879cb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"", Pod:"calico-kube-controllers-f5879cb96-dlw8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice135ddd9fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:50.218293 containerd[1464]: 2025-11-01 00:25:50.172 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.73/32] ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.218293 containerd[1464]: 2025-11-01 00:25:50.172 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice135ddd9fd ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.218293 containerd[1464]: 2025-11-01 00:25:50.186 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.218293 containerd[1464]: 2025-11-01 00:25:50.186 [INFO][4892] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0", GenerateName:"calico-kube-controllers-f5879cb96-", Namespace:"calico-system", SelfLink:"", UID:"c30f1fcc-7cd9-400f-884f-bd1e3091973a", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5879cb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba", Pod:"calico-kube-controllers-f5879cb96-dlw8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice135ddd9fd", MAC:"72:6f:6b:d9:7b:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:25:50.218293 containerd[1464]: 2025-11-01 00:25:50.209 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba" Namespace="calico-system" Pod="calico-kube-controllers-f5879cb96-dlw8c" WorkloadEndpoint="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:25:50.266063 kubelet[2546]: E1101 00:25:50.265390 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:50.266063 kubelet[2546]: E1101 00:25:50.265488 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:25:50.266544 kubelet[2546]: E1101 00:25:50.265542 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:25:50.266544 kubelet[2546]: E1101 00:25:50.265591 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:25:50.269865 containerd[1464]: time="2025-11-01T00:25:50.268600317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:50.269865 containerd[1464]: time="2025-11-01T00:25:50.268676988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:50.269865 containerd[1464]: time="2025-11-01T00:25:50.268690798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:50.269865 containerd[1464]: time="2025-11-01T00:25:50.268838779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:50.321276 systemd[1]: Started cri-containerd-de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba.scope - libcontainer container de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba. 
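Once a pull fails with ErrImagePull, the kubelet moves the container to ImagePullBackOff and retries with an exponentially increasing delay, which is why the same "Back-off pulling image" errors recur throughout this log. A sketch of that capped doubling; the 10 s initial value and 300 s cap are assumed kubelet defaults, not values read from this log:

# Capped exponential back-off of the kind kubelet applies to repeated image pulls.
INITIAL_S = 10   # assumed default initial image-pull back-off
CAP_S = 300      # assumed default cap (5 minutes)

def backoff_delays(retries):
    """Yield the delay before each retry, doubling up to the cap."""
    delay = INITIAL_S
    for _ in range(retries):
        yield delay
        delay = min(delay * 2, CAP_S)

print(list(backoff_delays(7)))  # [10, 20, 40, 80, 160, 300, 300]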
Nov 1 00:25:50.379811 systemd-networkd[1380]: cali78953af04fb: Gained IPv6LL Nov 1 00:25:50.411872 containerd[1464]: time="2025-11-01T00:25:50.411821672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5879cb96-dlw8c,Uid:c30f1fcc-7cd9-400f-884f-bd1e3091973a,Namespace:calico-system,Attempt:1,} returns sandbox id \"de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba\"" Nov 1 00:25:50.414518 containerd[1464]: time="2025-11-01T00:25:50.414495323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:25:50.506195 systemd-networkd[1380]: cali78950839dc6: Gained IPv6LL Nov 1 00:25:50.542589 containerd[1464]: time="2025-11-01T00:25:50.542520170Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:50.543374 containerd[1464]: time="2025-11-01T00:25:50.543333616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:25:50.543689 containerd[1464]: time="2025-11-01T00:25:50.543417727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:25:50.543773 kubelet[2546]: E1101 00:25:50.543634 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:25:50.543773 kubelet[2546]: E1101 00:25:50.543682 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:25:50.543773 kubelet[2546]: E1101 00:25:50.543761 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f5879cb96-dlw8c_calico-system(c30f1fcc-7cd9-400f-884f-bd1e3091973a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:50.544015 kubelet[2546]: E1101 00:25:50.543813 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:25:50.571416 systemd-networkd[1380]: cali65e44865f95: Gained IPv6LL Nov 1 00:25:51.266810 
kubelet[2546]: E1101 00:25:51.266707 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:25:51.268623 kubelet[2546]: E1101 00:25:51.268449 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:25:52.170666 systemd-networkd[1380]: calice135ddd9fd: Gained IPv6LL Nov 1 00:25:52.268087 kubelet[2546]: E1101 00:25:52.267994 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:25:54.918868 containerd[1464]: time="2025-11-01T00:25:54.918793591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:25:55.196375 containerd[1464]: time="2025-11-01T00:25:55.195687286Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:55.197057 containerd[1464]: time="2025-11-01T00:25:55.196947224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:25:55.197144 containerd[1464]: time="2025-11-01T00:25:55.197105705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:25:55.197446 kubelet[2546]: E1101 00:25:55.197337 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:25:55.197446 kubelet[2546]: E1101 00:25:55.197392 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:25:55.198133 kubelet[2546]: E1101 00:25:55.198081 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:55.200334 containerd[1464]: time="2025-11-01T00:25:55.200259234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:25:55.351912 containerd[1464]: time="2025-11-01T00:25:55.351716722Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:55.352841 containerd[1464]: time="2025-11-01T00:25:55.352661847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:25:55.352841 containerd[1464]: time="2025-11-01T00:25:55.352737779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:25:55.352970 kubelet[2546]: E1101 00:25:55.352893 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:25:55.352970 kubelet[2546]: E1101 00:25:55.352934 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:25:55.353326 kubelet[2546]: E1101 00:25:55.353048 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:55.353326 kubelet[2546]: E1101 00:25:55.353105 2546 pod_workers.go:1324] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:25:56.119790 kubelet[2546]: I1101 00:25:56.118308 2546 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:25:56.119790 kubelet[2546]: E1101 00:25:56.118830 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:25:56.276443 kubelet[2546]: E1101 00:25:56.276404 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:26:00.919856 containerd[1464]: time="2025-11-01T00:26:00.919238934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:26:01.077637 containerd[1464]: time="2025-11-01T00:26:01.077579882Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:01.078563 containerd[1464]: time="2025-11-01T00:26:01.078513567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:26:01.078730 containerd[1464]: time="2025-11-01T00:26:01.078643187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:01.078947 kubelet[2546]: E1101 00:26:01.078894 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:26:01.081123 kubelet[2546]: E1101 00:26:01.078949 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:26:01.081123 kubelet[2546]: E1101 00:26:01.079020 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t79g9_calico-system(a394b5b4-84f6-43c3-bf21-09838f083553): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:01.081123 kubelet[2546]: E1101 00:26:01.079270 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:26:02.920105 containerd[1464]: time="2025-11-01T00:26:02.919952539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:26:03.051098 containerd[1464]: time="2025-11-01T00:26:03.050087549Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:03.051098 containerd[1464]: time="2025-11-01T00:26:03.051085092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:26:03.051302 containerd[1464]: time="2025-11-01T00:26:03.051167442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:03.051465 kubelet[2546]: E1101 00:26:03.051418 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:03.051828 kubelet[2546]: E1101 00:26:03.051469 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:03.051828 kubelet[2546]: E1101 00:26:03.051629 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79458bd765-tc96j_calico-apiserver(1449e27d-cfd3-4b57-8ca8-d99ff2c00988): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:03.051828 kubelet[2546]: E1101 00:26:03.051663 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:26:03.053251 containerd[1464]: time="2025-11-01T00:26:03.053214931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 
00:26:03.185898 containerd[1464]: time="2025-11-01T00:26:03.185740426Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:03.186870 containerd[1464]: time="2025-11-01T00:26:03.186823211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:26:03.186968 containerd[1464]: time="2025-11-01T00:26:03.186909391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:26:03.187143 kubelet[2546]: E1101 00:26:03.187103 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:26:03.187219 kubelet[2546]: E1101 00:26:03.187150 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:26:03.187315 kubelet[2546]: E1101 00:26:03.187225 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:03.188712 containerd[1464]: time="2025-11-01T00:26:03.188663319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:26:03.318360 containerd[1464]: time="2025-11-01T00:26:03.318302131Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:03.320304 containerd[1464]: time="2025-11-01T00:26:03.319419376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:26:03.320304 containerd[1464]: time="2025-11-01T00:26:03.319497296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:26:03.320461 kubelet[2546]: E1101 00:26:03.319601 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:26:03.320461 kubelet[2546]: E1101 00:26:03.319649 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:26:03.320461 kubelet[2546]: E1101 00:26:03.319723 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:03.320658 kubelet[2546]: E1101 00:26:03.319770 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:26:04.921249 containerd[1464]: time="2025-11-01T00:26:04.921196055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:26:05.055580 containerd[1464]: time="2025-11-01T00:26:05.055501516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:05.056740 containerd[1464]: time="2025-11-01T00:26:05.056620680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:26:05.056740 containerd[1464]: time="2025-11-01T00:26:05.056697290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:05.056915 kubelet[2546]: E1101 00:26:05.056867 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:05.057392 kubelet[2546]: E1101 00:26:05.056919 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:05.057392 kubelet[2546]: E1101 00:26:05.057000 2546 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-r82hw_calico-apiserver(7decf862-2dea-422d-a655-b341baeeaa59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:05.057392 kubelet[2546]: E1101 00:26:05.057072 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:26:05.905122 containerd[1464]: time="2025-11-01T00:26:05.905073585Z" level=info msg="StopPodSandbox for \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\"" Nov 1 00:26:05.920751 containerd[1464]: time="2025-11-01T00:26:05.920538404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:05.992 [WARNING][5031] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4745334-2dc8-452d-b994-9002bb77af9f", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e", Pod:"coredns-66bc5c9577-9httv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31effdd235b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:05.993 [INFO][5031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:05.993 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" iface="eth0" netns="" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:05.993 [INFO][5031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:05.993 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:06.034 [INFO][5040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:06.035 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:06.035 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:06.043 [WARNING][5040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:06.043 [INFO][5040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:06.045 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.050396 containerd[1464]: 2025-11-01 00:26:06.048 [INFO][5031] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.051828 containerd[1464]: time="2025-11-01T00:26:06.051557863Z" level=info msg="TearDown network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\" successfully" Nov 1 00:26:06.051828 containerd[1464]: time="2025-11-01T00:26:06.051580973Z" level=info msg="StopPodSandbox for \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\" returns successfully" Nov 1 00:26:06.052724 containerd[1464]: time="2025-11-01T00:26:06.052680486Z" level=info msg="RemovePodSandbox for \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\"" Nov 1 00:26:06.052778 containerd[1464]: time="2025-11-01T00:26:06.052727896Z" level=info msg="Forcibly stopping sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\"" Nov 1 00:26:06.064054 containerd[1464]: time="2025-11-01T00:26:06.063268456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:06.064695 containerd[1464]: time="2025-11-01T00:26:06.064413480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:26:06.064695 containerd[1464]: time="2025-11-01T00:26:06.064512690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:06.064769 kubelet[2546]: E1101 00:26:06.064689 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:06.065064 kubelet[2546]: E1101 00:26:06.064740 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:06.065064 kubelet[2546]: E1101 00:26:06.064853 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-4s2pm_calico-apiserver(6f8b1313-5d3a-421c-a1c3-861bc7b1da27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:06.065064 kubelet[2546]: E1101 00:26:06.064890 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:26:06.177464 containerd[1464]: 
2025-11-01 00:26:06.115 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4745334-2dc8-452d-b994-9002bb77af9f", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"86c61fc69763a03dcd5332394ae53573c249df2126f9076d71eddee8f94dc12e", Pod:"coredns-66bc5c9577-9httv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31effdd235b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.115 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.115 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" iface="eth0" netns="" Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.115 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.115 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.160 [INFO][5061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.160 [INFO][5061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.160 [INFO][5061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.169 [WARNING][5061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.169 [INFO][5061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" HandleID="k8s-pod-network.46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Workload="172--234--26--141-k8s-coredns--66bc5c9577--9httv-eth0" Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.172 [INFO][5061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.177464 containerd[1464]: 2025-11-01 00:26:06.174 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b" Nov 1 00:26:06.177464 containerd[1464]: time="2025-11-01T00:26:06.177166078Z" level=info msg="TearDown network for sandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\" successfully" Nov 1 00:26:06.183991 containerd[1464]: time="2025-11-01T00:26:06.183692163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:06.183991 containerd[1464]: time="2025-11-01T00:26:06.183825493Z" level=info msg="RemovePodSandbox \"46170e597a2b600a6bd189910c402671a4b14d7e981910a27760ce049bf8232b\" returns successfully" Nov 1 00:26:06.184866 containerd[1464]: time="2025-11-01T00:26:06.184667326Z" level=info msg="StopPodSandbox for \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\"" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.233 [WARNING][5076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a394b5b4-84f6-43c3-bf21-09838f083553", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8", Pod:"goldmane-7c778bb748-t79g9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b150de3ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.235 [INFO][5076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.235 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" iface="eth0" netns="" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.235 [INFO][5076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.235 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.274 [INFO][5083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.274 [INFO][5083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.274 [INFO][5083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.281 [WARNING][5083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.281 [INFO][5083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.282 [INFO][5083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.287334 containerd[1464]: 2025-11-01 00:26:06.284 [INFO][5076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.288238 containerd[1464]: time="2025-11-01T00:26:06.287920849Z" level=info msg="TearDown network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\" successfully" Nov 1 00:26:06.288238 containerd[1464]: time="2025-11-01T00:26:06.287989150Z" level=info msg="StopPodSandbox for \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\" returns successfully" Nov 1 00:26:06.288666 containerd[1464]: time="2025-11-01T00:26:06.288645962Z" level=info msg="RemovePodSandbox for \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\"" Nov 1 00:26:06.288969 containerd[1464]: time="2025-11-01T00:26:06.288785452Z" level=info msg="Forcibly stopping sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\"" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.335 [WARNING][5097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a394b5b4-84f6-43c3-bf21-09838f083553", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"78853980f1b0d39e1c44b5a89daf44cd8d0c25bd0a53574c6e923491b7937de8", Pod:"goldmane-7c778bb748-t79g9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b150de3ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.335 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.335 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" iface="eth0" netns="" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.335 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.335 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.361 [INFO][5105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.362 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.362 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.371 [WARNING][5105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.371 [INFO][5105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" HandleID="k8s-pod-network.cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Workload="172--234--26--141-k8s-goldmane--7c778bb748--t79g9-eth0" Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.373 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.380143 containerd[1464]: 2025-11-01 00:26:06.377 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104" Nov 1 00:26:06.381106 containerd[1464]: time="2025-11-01T00:26:06.380632754Z" level=info msg="TearDown network for sandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\" successfully" Nov 1 00:26:06.385146 containerd[1464]: time="2025-11-01T00:26:06.384790519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:06.385146 containerd[1464]: time="2025-11-01T00:26:06.384833759Z" level=info msg="RemovePodSandbox \"cf576c1081338dbb942ecb542c47ae78f2dde699ddac4116fd8a580bde07b104\" returns successfully" Nov 1 00:26:06.386700 containerd[1464]: time="2025-11-01T00:26:06.386139894Z" level=info msg="StopPodSandbox for \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\"" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.435 [WARNING][5119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"7decf862-2dea-422d-a655-b341baeeaa59", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80", Pod:"calico-apiserver-57df9d5c69-r82hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42090c753a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.435 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.436 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" iface="eth0" netns="" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.436 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.436 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.466 [INFO][5127] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.467 [INFO][5127] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.467 [INFO][5127] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.474 [WARNING][5127] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.474 [INFO][5127] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.476 [INFO][5127] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.483486 containerd[1464]: 2025-11-01 00:26:06.478 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.483973 containerd[1464]: time="2025-11-01T00:26:06.483487825Z" level=info msg="TearDown network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\" successfully" Nov 1 00:26:06.483973 containerd[1464]: time="2025-11-01T00:26:06.483538056Z" level=info msg="StopPodSandbox for \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\" returns successfully" Nov 1 00:26:06.485117 containerd[1464]: time="2025-11-01T00:26:06.484473439Z" level=info msg="RemovePodSandbox for \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\"" Nov 1 00:26:06.485117 containerd[1464]: time="2025-11-01T00:26:06.484553070Z" level=info msg="Forcibly stopping sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\"" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.526 [WARNING][5143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"7decf862-2dea-422d-a655-b341baeeaa59", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"268ce8e3d53bc0f504b5f9a32f8f611c1e7b1ac6ece90b76a8d51a5ab2cd7f80", Pod:"calico-apiserver-57df9d5c69-r82hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42090c753a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.527 [INFO][5143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.527 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" iface="eth0" netns="" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.527 [INFO][5143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.527 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.553 [INFO][5150] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.553 [INFO][5150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.553 [INFO][5150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.561 [WARNING][5150] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.561 [INFO][5150] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" HandleID="k8s-pod-network.7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--r82hw-eth0" Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.562 [INFO][5150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.567593 containerd[1464]: 2025-11-01 00:26:06.565 [INFO][5143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788" Nov 1 00:26:06.567992 containerd[1464]: time="2025-11-01T00:26:06.567660828Z" level=info msg="TearDown network for sandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\" successfully" Nov 1 00:26:06.573453 containerd[1464]: time="2025-11-01T00:26:06.572680356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:06.573453 containerd[1464]: time="2025-11-01T00:26:06.572833007Z" level=info msg="RemovePodSandbox \"7382af01f9b1423a191f96bfdc631af04c1c46729aae7757fa8ee1839538a788\" returns successfully" Nov 1 00:26:06.573978 containerd[1464]: time="2025-11-01T00:26:06.573944141Z" level=info msg="StopPodSandbox for \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\"" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.618 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0", GenerateName:"calico-apiserver-79458bd765-", Namespace:"calico-apiserver", SelfLink:"", UID:"1449e27d-cfd3-4b57-8ca8-d99ff2c00988", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79458bd765", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f", Pod:"calico-apiserver-79458bd765-tc96j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78950839dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.618 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.619 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" iface="eth0" netns="" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.619 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.619 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.657 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.657 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.658 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.669 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.669 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.671 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.683232 containerd[1464]: 2025-11-01 00:26:06.673 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.683232 containerd[1464]: time="2025-11-01T00:26:06.680124065Z" level=info msg="TearDown network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\" successfully" Nov 1 00:26:06.683232 containerd[1464]: time="2025-11-01T00:26:06.680181905Z" level=info msg="StopPodSandbox for \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\" returns successfully" Nov 1 00:26:06.688057 containerd[1464]: time="2025-11-01T00:26:06.686711390Z" level=info msg="RemovePodSandbox for \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\"" Nov 1 00:26:06.688057 containerd[1464]: time="2025-11-01T00:26:06.686744350Z" level=info msg="Forcibly stopping sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\"" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.746 [WARNING][5185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0", GenerateName:"calico-apiserver-79458bd765-", Namespace:"calico-apiserver", SelfLink:"", UID:"1449e27d-cfd3-4b57-8ca8-d99ff2c00988", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79458bd765", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"b2f603ff7fd4400042277fdaff6339b3f384584a9e8cdc1b15dae12d6faf5c2f", Pod:"calico-apiserver-79458bd765-tc96j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78950839dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.747 [INFO][5185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.747 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" iface="eth0" netns="" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.747 [INFO][5185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.747 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.790 [INFO][5192] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.791 [INFO][5192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.791 [INFO][5192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.804 [WARNING][5192] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.804 [INFO][5192] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" HandleID="k8s-pod-network.2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Workload="172--234--26--141-k8s-calico--apiserver--79458bd765--tc96j-eth0" Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.807 [INFO][5192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.813244 containerd[1464]: 2025-11-01 00:26:06.810 [INFO][5185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b" Nov 1 00:26:06.814187 containerd[1464]: time="2025-11-01T00:26:06.814159153Z" level=info msg="TearDown network for sandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\" successfully" Nov 1 00:26:06.818313 containerd[1464]: time="2025-11-01T00:26:06.818284829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:06.818428 containerd[1464]: time="2025-11-01T00:26:06.818411859Z" level=info msg="RemovePodSandbox \"2f2bd2bf85c93abb68f9a0e072ecc4ba144a98ae174a93d8bf0efe2ccca84a0b\" returns successfully" Nov 1 00:26:06.819057 containerd[1464]: time="2025-11-01T00:26:06.819011282Z" level=info msg="StopPodSandbox for \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\"" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.865 [WARNING][5206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0", GenerateName:"calico-kube-controllers-f5879cb96-", Namespace:"calico-system", SelfLink:"", UID:"c30f1fcc-7cd9-400f-884f-bd1e3091973a", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5879cb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba", Pod:"calico-kube-controllers-f5879cb96-dlw8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice135ddd9fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.866 [INFO][5206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.866 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" iface="eth0" netns="" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.866 [INFO][5206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.866 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.900 [INFO][5214] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.900 [INFO][5214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.900 [INFO][5214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.906 [WARNING][5214] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.906 [INFO][5214] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.907 [INFO][5214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:06.914739 containerd[1464]: 2025-11-01 00:26:06.909 [INFO][5206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:06.914739 containerd[1464]: time="2025-11-01T00:26:06.914125674Z" level=info msg="TearDown network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\" successfully" Nov 1 00:26:06.914739 containerd[1464]: time="2025-11-01T00:26:06.914147964Z" level=info msg="StopPodSandbox for \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\" returns successfully" Nov 1 00:26:06.916060 containerd[1464]: time="2025-11-01T00:26:06.915994821Z" level=info msg="RemovePodSandbox for \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\"" Nov 1 00:26:06.916184 containerd[1464]: time="2025-11-01T00:26:06.916083291Z" level=info msg="Forcibly stopping sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\"" Nov 1 00:26:06.920534 containerd[1464]: time="2025-11-01T00:26:06.920447488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:06.967 [WARNING][5228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0", GenerateName:"calico-kube-controllers-f5879cb96-", Namespace:"calico-system", SelfLink:"", UID:"c30f1fcc-7cd9-400f-884f-bd1e3091973a", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5879cb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"de1551fe8b87fa5cae6dce404faab46681d846de9948c0e7c558e04d09b1a5ba", Pod:"calico-kube-controllers-f5879cb96-dlw8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice135ddd9fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:06.967 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:06.968 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" iface="eth0" netns="" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:06.968 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:06.968 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:07.003 [INFO][5235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:07.004 [INFO][5235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:07.004 [INFO][5235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:07.012 [WARNING][5235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:07.012 [INFO][5235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" HandleID="k8s-pod-network.83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Workload="172--234--26--141-k8s-calico--kube--controllers--f5879cb96--dlw8c-eth0" Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:07.014 [INFO][5235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.021884 containerd[1464]: 2025-11-01 00:26:07.019 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7" Nov 1 00:26:07.022802 containerd[1464]: time="2025-11-01T00:26:07.021887481Z" level=info msg="TearDown network for sandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\" successfully" Nov 1 00:26:07.028644 containerd[1464]: time="2025-11-01T00:26:07.028555805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:07.029049 containerd[1464]: time="2025-11-01T00:26:07.028867946Z" level=info msg="RemovePodSandbox \"83cb295a68d5389f75192d08240451b91a018d22e4e62d8e8348e268e7895fc7\" returns successfully" Nov 1 00:26:07.029723 containerd[1464]: time="2025-11-01T00:26:07.029676030Z" level=info msg="StopPodSandbox for \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\"" Nov 1 00:26:07.057864 containerd[1464]: time="2025-11-01T00:26:07.057705250Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:07.060501 containerd[1464]: time="2025-11-01T00:26:07.059712727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:26:07.060501 containerd[1464]: time="2025-11-01T00:26:07.059797857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:26:07.060626 kubelet[2546]: E1101 00:26:07.060121 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:26:07.060626 kubelet[2546]: E1101 00:26:07.060204 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:26:07.060626 kubelet[2546]: E1101 00:26:07.060277 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f5879cb96-dlw8c_calico-system(c30f1fcc-7cd9-400f-884f-bd1e3091973a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:07.060626 kubelet[2546]: E1101 00:26:07.060307 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.078 [WARNING][5250] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f8b1313-5d3a-421c-a1c3-861bc7b1da27", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c", Pod:"calico-apiserver-57df9d5c69-4s2pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78953af04fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.079 [INFO][5250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.079 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" iface="eth0" netns="" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.079 [INFO][5250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.079 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.118 [INFO][5257] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.119 [INFO][5257] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.119 [INFO][5257] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.128 [WARNING][5257] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.128 [INFO][5257] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.131 [INFO][5257] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.138002 containerd[1464]: 2025-11-01 00:26:07.133 [INFO][5250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.138948 containerd[1464]: time="2025-11-01T00:26:07.138089056Z" level=info msg="TearDown network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\" successfully" Nov 1 00:26:07.138948 containerd[1464]: time="2025-11-01T00:26:07.138122566Z" level=info msg="StopPodSandbox for \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\" returns successfully" Nov 1 00:26:07.140864 containerd[1464]: time="2025-11-01T00:26:07.140488085Z" level=info msg="RemovePodSandbox for \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\"" Nov 1 00:26:07.140864 containerd[1464]: time="2025-11-01T00:26:07.140522555Z" level=info msg="Forcibly stopping sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\"" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.190 [WARNING][5271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0", GenerateName:"calico-apiserver-57df9d5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f8b1313-5d3a-421c-a1c3-861bc7b1da27", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57df9d5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"99c35f1734eb66ca95ca7b4d214a4efc24de6d28cb5158de171c074c6bb91f3c", Pod:"calico-apiserver-57df9d5c69-4s2pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali78953af04fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.191 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.191 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" iface="eth0" netns="" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.191 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.191 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.223 [INFO][5279] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.223 [INFO][5279] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.223 [INFO][5279] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.231 [WARNING][5279] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.232 [INFO][5279] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" HandleID="k8s-pod-network.384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Workload="172--234--26--141-k8s-calico--apiserver--57df9d5c69--4s2pm-eth0" Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.233 [INFO][5279] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.239509 containerd[1464]: 2025-11-01 00:26:07.237 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a" Nov 1 00:26:07.239993 containerd[1464]: time="2025-11-01T00:26:07.239570969Z" level=info msg="TearDown network for sandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\" successfully" Nov 1 00:26:07.247720 containerd[1464]: time="2025-11-01T00:26:07.247656439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:07.247797 containerd[1464]: time="2025-11-01T00:26:07.247732919Z" level=info msg="RemovePodSandbox \"384e9b07ec63db3b10cb2ca7a1a5795a8fb968ae20c903acfd7201cee386644a\" returns successfully" Nov 1 00:26:07.248272 containerd[1464]: time="2025-11-01T00:26:07.248236480Z" level=info msg="StopPodSandbox for \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\"" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.288 [WARNING][5294] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-csi--node--driver--8hdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12cca151-8712-4604-9035-7f2e07caab0c", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38", Pod:"csi-node-driver-8hdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65e44865f95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.288 [INFO][5294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.289 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" iface="eth0" netns="" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.289 [INFO][5294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.289 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.314 [INFO][5301] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.314 [INFO][5301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.314 [INFO][5301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.321 [WARNING][5301] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.321 [INFO][5301] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.323 [INFO][5301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.329231 containerd[1464]: 2025-11-01 00:26:07.325 [INFO][5294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.331249 containerd[1464]: time="2025-11-01T00:26:07.330340074Z" level=info msg="TearDown network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\" successfully" Nov 1 00:26:07.331249 containerd[1464]: time="2025-11-01T00:26:07.330367054Z" level=info msg="StopPodSandbox for \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\" returns successfully" Nov 1 00:26:07.331249 containerd[1464]: time="2025-11-01T00:26:07.331215427Z" level=info msg="RemovePodSandbox for \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\"" Nov 1 00:26:07.331322 containerd[1464]: time="2025-11-01T00:26:07.331247667Z" level=info msg="Forcibly stopping sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\"" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.373 [WARNING][5315] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-csi--node--driver--8hdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12cca151-8712-4604-9035-7f2e07caab0c", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"1a983211d7731adb0845b8ed7554d7faa920484d74de36fb26d8d6bfcc340f38", Pod:"csi-node-driver-8hdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65e44865f95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.374 [INFO][5315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.374 [INFO][5315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" iface="eth0" netns="" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.374 [INFO][5315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.374 [INFO][5315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.419 [INFO][5322] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.419 [INFO][5322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.419 [INFO][5322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.428 [WARNING][5322] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.428 [INFO][5322] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" HandleID="k8s-pod-network.5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Workload="172--234--26--141-k8s-csi--node--driver--8hdgb-eth0" Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.431 [INFO][5322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.436287 containerd[1464]: 2025-11-01 00:26:07.433 [INFO][5315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa" Nov 1 00:26:07.436703 containerd[1464]: time="2025-11-01T00:26:07.436338563Z" level=info msg="TearDown network for sandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\" successfully" Nov 1 00:26:07.443390 containerd[1464]: time="2025-11-01T00:26:07.443344338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:07.443500 containerd[1464]: time="2025-11-01T00:26:07.443454288Z" level=info msg="RemovePodSandbox \"5d021e0d0826e07c7c598fe938c7f87072059c299731b03c21c9d8c7989512fa\" returns successfully" Nov 1 00:26:07.444201 containerd[1464]: time="2025-11-01T00:26:07.444140070Z" level=info msg="StopPodSandbox for \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\"" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.511 [WARNING][5336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5b66f29d-a0c7-459a-a622-8bd163fa7e38", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89", Pod:"coredns-66bc5c9577-k8blm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califda2a979fa0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.511 [INFO][5336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.511 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" iface="eth0" netns="" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.511 [INFO][5336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.511 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.554 [INFO][5344] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.557 [INFO][5344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.557 [INFO][5344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.578 [WARNING][5344] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.578 [INFO][5344] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.580 [INFO][5344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.588371 containerd[1464]: 2025-11-01 00:26:07.584 [INFO][5336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.589371 containerd[1464]: time="2025-11-01T00:26:07.588452357Z" level=info msg="TearDown network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\" successfully" Nov 1 00:26:07.589371 containerd[1464]: time="2025-11-01T00:26:07.588483717Z" level=info msg="StopPodSandbox for \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\" returns successfully" Nov 1 00:26:07.589371 containerd[1464]: time="2025-11-01T00:26:07.589267170Z" level=info msg="RemovePodSandbox for \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\"" Nov 1 00:26:07.589371 containerd[1464]: time="2025-11-01T00:26:07.589291820Z" level=info msg="Forcibly stopping sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\"" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.685 [WARNING][5358] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5b66f29d-a0c7-459a-a622-8bd163fa7e38", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-26-141", ContainerID:"a2b85cb29462145c9700c7edc5ff8bf0de10f05a7c706a9a99c42ca1992c2b89", Pod:"coredns-66bc5c9577-k8blm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califda2a979fa0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.685 [INFO][5358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.685 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" iface="eth0" netns="" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.685 [INFO][5358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.685 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.712 [INFO][5365] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.713 [INFO][5365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.713 [INFO][5365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.718 [WARNING][5365] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.718 [INFO][5365] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" HandleID="k8s-pod-network.52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Workload="172--234--26--141-k8s-coredns--66bc5c9577--k8blm-eth0" Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.719 [INFO][5365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.724631 containerd[1464]: 2025-11-01 00:26:07.722 [INFO][5358] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873" Nov 1 00:26:07.724631 containerd[1464]: time="2025-11-01T00:26:07.724224801Z" level=info msg="TearDown network for sandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\" successfully" Nov 1 00:26:07.731071 containerd[1464]: time="2025-11-01T00:26:07.729299740Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:26:07.731071 containerd[1464]: time="2025-11-01T00:26:07.729360830Z" level=info msg="RemovePodSandbox \"52d23000cb042a0bb1d6892bc40273b494e9e31d3cc986f53e35e30d09d3a873\" returns successfully" Nov 1 00:26:07.731071 containerd[1464]: time="2025-11-01T00:26:07.730359904Z" level=info msg="StopPodSandbox for \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\"" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.769 [WARNING][5379] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" WorkloadEndpoint="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.769 [INFO][5379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.769 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" iface="eth0" netns="" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.770 [INFO][5379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.770 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.798 [INFO][5386] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.798 [INFO][5386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.798 [INFO][5386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.804 [WARNING][5386] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.804 [INFO][5386] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.806 [INFO][5386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.812299 containerd[1464]: 2025-11-01 00:26:07.810 [INFO][5379] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.813628 containerd[1464]: time="2025-11-01T00:26:07.812817888Z" level=info msg="TearDown network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\" successfully" Nov 1 00:26:07.813628 containerd[1464]: time="2025-11-01T00:26:07.812883218Z" level=info msg="StopPodSandbox for \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\" returns successfully" Nov 1 00:26:07.813628 containerd[1464]: time="2025-11-01T00:26:07.813493921Z" level=info msg="RemovePodSandbox for \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\"" Nov 1 00:26:07.813628 containerd[1464]: time="2025-11-01T00:26:07.813585891Z" level=info msg="Forcibly stopping sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\"" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.858 [WARNING][5400] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" WorkloadEndpoint="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.859 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.859 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" iface="eth0" netns="" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.859 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.860 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.896 [INFO][5407] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.897 [INFO][5407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.897 [INFO][5407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.904 [WARNING][5407] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.904 [INFO][5407] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" HandleID="k8s-pod-network.db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Workload="172--234--26--141-k8s-whisker--78997778df--lcfj5-eth0" Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.906 [INFO][5407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:07.912154 containerd[1464]: 2025-11-01 00:26:07.908 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c" Nov 1 00:26:07.912735 containerd[1464]: time="2025-11-01T00:26:07.912475445Z" level=info msg="TearDown network for sandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\" successfully" Nov 1 00:26:07.917599 containerd[1464]: time="2025-11-01T00:26:07.917530842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:26:07.917693 containerd[1464]: time="2025-11-01T00:26:07.917596733Z" level=info msg="RemovePodSandbox \"db0eca4eca1c3451839d39996a5af8c542e1f0088870a007fed67e5bb19b7c2c\" returns successfully" Nov 1 00:26:07.921885 kubelet[2546]: E1101 00:26:07.921815 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:26:13.925600 kubelet[2546]: E1101 00:26:13.925541 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:26:14.919069 kubelet[2546]: E1101 00:26:14.919001 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:26:16.920807 kubelet[2546]: E1101 00:26:16.920701 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:26:19.923058 kubelet[2546]: E1101 00:26:19.921377 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:26:19.924177 containerd[1464]: time="2025-11-01T00:26:19.923762605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:26:19.928244 kubelet[2546]: E1101 00:26:19.927462 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:26:20.091792 containerd[1464]: time="2025-11-01T00:26:20.091630415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:20.094254 containerd[1464]: time="2025-11-01T00:26:20.093022747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:26:20.094254 containerd[1464]: 
time="2025-11-01T00:26:20.094064282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:26:20.094723 kubelet[2546]: E1101 00:26:20.094676 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:26:20.094723 kubelet[2546]: E1101 00:26:20.094720 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:26:20.094849 kubelet[2546]: E1101 00:26:20.094784 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:20.096734 containerd[1464]: time="2025-11-01T00:26:20.096566660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:26:20.228770 containerd[1464]: time="2025-11-01T00:26:20.227966444Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:20.229720 containerd[1464]: time="2025-11-01T00:26:20.229614174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:26:20.231122 containerd[1464]: time="2025-11-01T00:26:20.231051537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:26:20.231540 kubelet[2546]: E1101 00:26:20.231309 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:26:20.231540 kubelet[2546]: E1101 00:26:20.231355 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:26:20.231540 kubelet[2546]: E1101 00:26:20.231443 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:20.231657 kubelet[2546]: E1101 00:26:20.231500 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:26:21.919551 kubelet[2546]: E1101 00:26:21.919000 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:26:27.918639 kubelet[2546]: E1101 00:26:27.917830 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:26:28.922468 containerd[1464]: time="2025-11-01T00:26:28.922212251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:26:29.098393 containerd[1464]: time="2025-11-01T00:26:29.098192484Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:29.099639 containerd[1464]: time="2025-11-01T00:26:29.098959451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:26:29.099639 containerd[1464]: time="2025-11-01T00:26:29.099013611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:29.101374 kubelet[2546]: E1101 00:26:29.099973 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:26:29.101374 kubelet[2546]: E1101 00:26:29.100015 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:26:29.101374 kubelet[2546]: E1101 00:26:29.100204 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t79g9_calico-system(a394b5b4-84f6-43c3-bf21-09838f083553): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:29.101374 kubelet[2546]: E1101 00:26:29.100241 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:26:29.105301 containerd[1464]: time="2025-11-01T00:26:29.100969043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:26:29.258290 containerd[1464]: time="2025-11-01T00:26:29.257882190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:29.259257 containerd[1464]: time="2025-11-01T00:26:29.259127615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:26:29.259257 containerd[1464]: time="2025-11-01T00:26:29.259202605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:29.259877 kubelet[2546]: E1101 00:26:29.259707 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:29.259877 kubelet[2546]: E1101 00:26:29.259791 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:29.260274 kubelet[2546]: E1101 00:26:29.260111 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79458bd765-tc96j_calico-apiserver(1449e27d-cfd3-4b57-8ca8-d99ff2c00988): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:29.260274 kubelet[2546]: E1101 00:26:29.260199 2546 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:26:30.919692 kubelet[2546]: E1101 00:26:30.918878 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:26:30.922580 containerd[1464]: time="2025-11-01T00:26:30.921669881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:26:31.064701 containerd[1464]: time="2025-11-01T00:26:31.064503669Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:31.065481 containerd[1464]: time="2025-11-01T00:26:31.065349095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:26:31.065481 containerd[1464]: time="2025-11-01T00:26:31.065432375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:26:31.065619 kubelet[2546]: E1101 00:26:31.065582 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:26:31.065661 kubelet[2546]: E1101 00:26:31.065620 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:26:31.065706 kubelet[2546]: E1101 00:26:31.065681 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:31.067873 containerd[1464]: time="2025-11-01T00:26:31.067695367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:26:31.265493 containerd[1464]: time="2025-11-01T00:26:31.265123051Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:31.267681 containerd[1464]: time="2025-11-01T00:26:31.266809496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:26:31.268158 containerd[1464]: time="2025-11-01T00:26:31.268076021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:26:31.269072 kubelet[2546]: E1101 00:26:31.268279 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:26:31.269072 kubelet[2546]: E1101 00:26:31.268324 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:26:31.269072 kubelet[2546]: E1101 00:26:31.268437 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:31.269295 kubelet[2546]: E1101 00:26:31.268476 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:26:31.923977 containerd[1464]: time="2025-11-01T00:26:31.923040535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:26:32.084197 containerd[1464]: time="2025-11-01T00:26:32.084116288Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:32.085169 containerd[1464]: time="2025-11-01T00:26:32.085101945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:26:32.085213 containerd[1464]: time="2025-11-01T00:26:32.085177334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:32.085494 kubelet[2546]: 
E1101 00:26:32.085434 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:32.085494 kubelet[2546]: E1101 00:26:32.085484 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:32.086148 kubelet[2546]: E1101 00:26:32.085571 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-4s2pm_calico-apiserver(6f8b1313-5d3a-421c-a1c3-861bc7b1da27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:32.086148 kubelet[2546]: E1101 00:26:32.085608 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:26:32.920281 kubelet[2546]: E1101 00:26:32.920178 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:26:33.919950 kubelet[2546]: E1101 00:26:33.919511 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:26:33.924130 containerd[1464]: time="2025-11-01T00:26:33.923863376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:26:34.083276 containerd[1464]: time="2025-11-01T00:26:34.083213445Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:34.085218 containerd[1464]: time="2025-11-01T00:26:34.084517021Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:26:34.085218 containerd[1464]: time="2025-11-01T00:26:34.084588561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:34.085527 kubelet[2546]: E1101 00:26:34.084731 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:34.085527 kubelet[2546]: E1101 00:26:34.084772 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:34.085527 kubelet[2546]: E1101 00:26:34.084847 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-r82hw_calico-apiserver(7decf862-2dea-422d-a655-b341baeeaa59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:34.085527 kubelet[2546]: E1101 00:26:34.084880 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:26:36.920964 containerd[1464]: time="2025-11-01T00:26:36.920900186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:26:37.055556 containerd[1464]: time="2025-11-01T00:26:37.055475157Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:37.056663 containerd[1464]: time="2025-11-01T00:26:37.056624964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:26:37.056846 containerd[1464]: time="2025-11-01T00:26:37.056704514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:26:37.056964 kubelet[2546]: E1101 00:26:37.056859 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:26:37.056964 kubelet[2546]: E1101 00:26:37.056915 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:26:37.057367 kubelet[2546]: E1101 00:26:37.057002 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f5879cb96-dlw8c_calico-system(c30f1fcc-7cd9-400f-884f-bd1e3091973a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:37.057367 kubelet[2546]: E1101 00:26:37.057073 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:26:37.918579 kubelet[2546]: E1101 00:26:37.918531 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:26:42.919335 kubelet[2546]: E1101 00:26:42.919272 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:26:43.921237 kubelet[2546]: E1101 00:26:43.920834 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:26:44.919213 kubelet[2546]: E1101 00:26:44.917945 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 
172.232.0.21 172.232.0.13" Nov 1 00:26:44.920412 kubelet[2546]: E1101 00:26:44.920332 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:26:44.920510 kubelet[2546]: E1101 00:26:44.920476 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:26:45.924063 kubelet[2546]: E1101 00:26:45.923987 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:26:46.921607 kubelet[2546]: E1101 00:26:46.920585 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:26:51.922696 kubelet[2546]: E1101 00:26:51.922250 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:26:54.784848 systemd[1]: Started sshd@7-172.234.26.141:22-139.178.68.195:44266.service - OpenSSH per-connection server daemon (139.178.68.195:44266). Nov 1 00:26:55.111105 sshd[5461]: Accepted publickey for core from 139.178.68.195 port 44266 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:26:55.114071 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:55.119375 systemd-logind[1448]: New session 8 of user core. Nov 1 00:26:55.127147 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:26:55.450512 sshd[5461]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:55.455533 systemd[1]: sshd@7-172.234.26.141:22-139.178.68.195:44266.service: Deactivated successfully. Nov 1 00:26:55.459787 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:26:55.461197 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:26:55.462998 systemd-logind[1448]: Removed session 8. Nov 1 00:26:56.918588 kubelet[2546]: E1101 00:26:56.918520 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:26:57.921971 kubelet[2546]: E1101 00:26:57.921216 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988" Nov 1 00:26:57.927152 kubelet[2546]: E1101 00:26:57.927132 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Nov 1 00:26:57.928319 kubelet[2546]: E1101 00:26:57.928285 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:26:57.928738 kubelet[2546]: E1101 00:26:57.928712 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:26:57.929405 kubelet[2546]: E1101 00:26:57.929102 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:26:58.919155 kubelet[2546]: E1101 00:26:58.918991 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:27:00.518580 systemd[1]: Started sshd@8-172.234.26.141:22-139.178.68.195:44282.service - OpenSSH per-connection server daemon (139.178.68.195:44282). Nov 1 00:27:00.855121 sshd[5496]: Accepted publickey for core from 139.178.68.195 port 44282 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:27:00.856972 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:00.862773 systemd-logind[1448]: New session 9 of user core. Nov 1 00:27:00.868286 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 1 00:27:01.226562 sshd[5496]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:01.231384 systemd[1]: sshd@8-172.234.26.141:22-139.178.68.195:44282.service: Deactivated successfully. Nov 1 00:27:01.234977 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:27:01.236620 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:27:01.237891 systemd-logind[1448]: Removed session 9. Nov 1 00:27:03.921552 kubelet[2546]: E1101 00:27:03.921286 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a" Nov 1 00:27:06.295237 systemd[1]: Started sshd@9-172.234.26.141:22-139.178.68.195:54904.service - OpenSSH per-connection server daemon (139.178.68.195:54904). Nov 1 00:27:06.625400 sshd[5518]: Accepted publickey for core from 139.178.68.195 port 54904 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:27:06.627269 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:06.634273 systemd-logind[1448]: New session 10 of user core. Nov 1 00:27:06.641175 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:27:06.944398 sshd[5518]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:06.950296 systemd[1]: sshd@9-172.234.26.141:22-139.178.68.195:54904.service: Deactivated successfully. Nov 1 00:27:06.954668 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:27:06.956436 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:27:06.958267 systemd-logind[1448]: Removed session 10. Nov 1 00:27:07.009304 systemd[1]: Started sshd@10-172.234.26.141:22-139.178.68.195:54914.service - OpenSSH per-connection server daemon (139.178.68.195:54914). Nov 1 00:27:07.340782 sshd[5532]: Accepted publickey for core from 139.178.68.195 port 54914 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:27:07.342836 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:07.351202 systemd-logind[1448]: New session 11 of user core. Nov 1 00:27:07.356191 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:27:07.692666 sshd[5532]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:07.697380 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:27:07.697735 systemd[1]: sshd@10-172.234.26.141:22-139.178.68.195:54914.service: Deactivated successfully. Nov 1 00:27:07.701200 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:27:07.702561 systemd-logind[1448]: Removed session 11. Nov 1 00:27:07.759300 systemd[1]: Started sshd@11-172.234.26.141:22-139.178.68.195:54918.service - OpenSSH per-connection server daemon (139.178.68.195:54918). 
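Each failing reference can be checked directly against the registry with the standard OCI distribution API, the same lookup containerd performs before logging "trying next host - response was http.StatusNotFound". A sketch assuming anonymous pull access and ghcr.io's usual token endpoint (the registry and repository below are taken from the log; a 404 on the HEAD reproduces the NotFound above):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Repository and tag taken from the failing reference in the log.
	repo, tag := "flatcar/calico/whisker", "v3.30.4"

	// Anonymous bearer token for a public repository (assumes ghcr.io's
	// usual token endpoint; other registries differ).
	resp, err := http.Get(fmt.Sprintf(
		"https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo))
	if err != nil {
		panic(err)
	}
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}
	resp.Body.Close()

	// HEAD the manifest: a 404 here is exactly the "not found" that
	// containerd wraps into the ErrImagePull entries above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}
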
Nov 1 00:27:08.093671 sshd[5548]: Accepted publickey for core from 139.178.68.195 port 54918 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:27:08.095534 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:08.100355 systemd-logind[1448]: New session 12 of user core. Nov 1 00:27:08.109353 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:27:08.411394 sshd[5548]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:08.416960 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:27:08.419770 systemd[1]: sshd@11-172.234.26.141:22-139.178.68.195:54918.service: Deactivated successfully. Nov 1 00:27:08.422839 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:27:08.426192 systemd-logind[1448]: Removed session 12. Nov 1 00:27:09.919996 containerd[1464]: time="2025-11-01T00:27:09.919847387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:27:10.062474 containerd[1464]: time="2025-11-01T00:27:10.062360853Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:10.063660 containerd[1464]: time="2025-11-01T00:27:10.063618972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:27:10.064078 kubelet[2546]: E1101 00:27:10.063854 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:10.064078 kubelet[2546]: E1101 00:27:10.063893 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:10.064078 kubelet[2546]: E1101 00:27:10.063994 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:10.065658 containerd[1464]: time="2025-11-01T00:27:10.063704852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:27:10.066745 containerd[1464]: time="2025-11-01T00:27:10.066523481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:27:10.200207 containerd[1464]: time="2025-11-01T00:27:10.200060535Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:10.201836 containerd[1464]: time="2025-11-01T00:27:10.201622584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:27:10.201836 containerd[1464]: time="2025-11-01T00:27:10.201715164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:10.202365 kubelet[2546]: E1101 00:27:10.202173 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:10.202365 kubelet[2546]: E1101 00:27:10.202252 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:10.203192 kubelet[2546]: E1101 00:27:10.202481 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fd7bd9949-qt64t_calico-system(009577cc-d930-45a6-aee8-0f7207b1b9a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:10.203192 kubelet[2546]: E1101 00:27:10.202535 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8" Nov 1 00:27:10.918733 kubelet[2546]: E1101 00:27:10.918653 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59" Nov 1 00:27:10.919658 kubelet[2546]: E1101 00:27:10.918866 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27" Nov 1 00:27:10.921238 kubelet[2546]: E1101 00:27:10.921153 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c" Nov 1 00:27:12.920424 containerd[1464]: time="2025-11-01T00:27:12.920387258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:27:13.476689 systemd[1]: Started sshd@12-172.234.26.141:22-139.178.68.195:52696.service - OpenSSH per-connection server daemon (139.178.68.195:52696). Nov 1 00:27:13.815372 sshd[5573]: Accepted publickey for core from 139.178.68.195 port 52696 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw Nov 1 00:27:13.817226 sshd[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:13.825866 systemd-logind[1448]: New session 13 of user core. Nov 1 00:27:13.829609 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:27:14.147388 sshd[5573]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:14.156332 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:27:14.157537 systemd[1]: sshd@12-172.234.26.141:22-139.178.68.195:52696.service: Deactivated successfully. Nov 1 00:27:14.164431 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:27:14.166320 systemd-logind[1448]: Removed session 13. 
Nov 1 00:27:14.171609 containerd[1464]: time="2025-11-01T00:27:14.171576219Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:14.173491 containerd[1464]: time="2025-11-01T00:27:14.173351597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:27:14.174261 containerd[1464]: time="2025-11-01T00:27:14.173440167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:14.174351 kubelet[2546]: E1101 00:27:14.174300 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:14.174758 kubelet[2546]: E1101 00:27:14.174352 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:14.174758 kubelet[2546]: E1101 00:27:14.174510 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t79g9_calico-system(a394b5b4-84f6-43c3-bf21-09838f083553): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:14.174758 kubelet[2546]: E1101 00:27:14.174540 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553" Nov 1 00:27:14.175568 containerd[1464]: time="2025-11-01T00:27:14.175042907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:14.227188 systemd[1]: Started sshd@13-172.234.26.141:22-139.178.68.195:52710.service - OpenSSH per-connection server daemon (139.178.68.195:52710). 
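The "trying next host" lines record containerd's resolver walking its configured registry hosts in order; a 404 advances to the next host, and exhausting the list becomes the final NotFound. A conceptual sketch of that loop (our simplification, not containerd's actual source):

package main

import (
	"errors"
	"fmt"
	"net/http"
)

// resolve is our simplification of the host fallback, not containerd's code:
// try each configured host, move on when one answers 404, and fail with
// NotFound once the list is exhausted.
func resolve(hosts []string, probe func(host string) int) (string, error) {
	for _, h := range hosts {
		switch probe(h) {
		case http.StatusOK:
			return h, nil
		case http.StatusNotFound:
			fmt.Printf("trying next host - response was http.StatusNotFound host=%s\n", h)
		default:
			return "", fmt.Errorf("host %s: unexpected response", h)
		}
	}
	return "", errors.New("not found")
}

func main() {
	// With ghcr.io as the only configured host, a single 404 is terminal.
	_, err := resolve([]string{"ghcr.io"}, func(string) int { return http.StatusNotFound })
	fmt.Println("pull failed:", err)
}
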
Nov 1 00:27:14.322696 containerd[1464]: time="2025-11-01T00:27:14.322639473Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:27:14.323531 containerd[1464]: time="2025-11-01T00:27:14.323481374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:27:14.323617 containerd[1464]: time="2025-11-01T00:27:14.323571163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:27:14.324255 kubelet[2546]: E1101 00:27:14.323794 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:27:14.324255 kubelet[2546]: E1101 00:27:14.323835 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:27:14.324255 kubelet[2546]: E1101 00:27:14.323897 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79458bd765-tc96j_calico-apiserver(1449e27d-cfd3-4b57-8ca8-d99ff2c00988): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:27:14.324255 kubelet[2546]: E1101 00:27:14.323928 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988"
Nov 1 00:27:14.597533 sshd[5586]: Accepted publickey for core from 139.178.68.195 port 52710 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:27:14.601949 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:27:14.609823 systemd-logind[1448]: New session 14 of user core.
Nov 1 00:27:14.617158 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 1 00:27:14.917821 kubelet[2546]: E1101 00:27:14.917666 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Nov 1 00:27:15.121290 sshd[5586]: pam_unix(sshd:session): session closed for user core
Nov 1 00:27:15.125986 systemd[1]: sshd@13-172.234.26.141:22-139.178.68.195:52710.service: Deactivated successfully.
Nov 1 00:27:15.126805 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit.
Nov 1 00:27:15.131851 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 00:27:15.135867 systemd-logind[1448]: Removed session 14.
Nov 1 00:27:15.191165 systemd[1]: Started sshd@14-172.234.26.141:22-139.178.68.195:52714.service - OpenSSH per-connection server daemon (139.178.68.195:52714).
Nov 1 00:27:15.516143 sshd[5597]: Accepted publickey for core from 139.178.68.195 port 52714 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:27:15.517549 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:27:15.523318 systemd-logind[1448]: New session 15 of user core.
Nov 1 00:27:15.530755 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 1 00:27:15.919679 kubelet[2546]: E1101 00:27:15.919595 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a"
Nov 1 00:27:16.475354 sshd[5597]: pam_unix(sshd:session): session closed for user core
Nov 1 00:27:16.478907 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:27:16.481194 systemd[1]: sshd@14-172.234.26.141:22-139.178.68.195:52714.service: Deactivated successfully.
Nov 1 00:27:16.483791 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:27:16.489594 systemd-logind[1448]: Removed session 15.
Nov 1 00:27:16.540335 systemd[1]: Started sshd@15-172.234.26.141:22-139.178.68.195:52724.service - OpenSSH per-connection server daemon (139.178.68.195:52724).
Nov 1 00:27:16.872738 sshd[5613]: Accepted publickey for core from 139.178.68.195 port 52724 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:27:16.876598 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:27:16.886400 systemd-logind[1448]: New session 16 of user core.
Nov 1 00:27:16.890291 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 1 00:27:17.331615 sshd[5613]: pam_unix(sshd:session): session closed for user core
Nov 1 00:27:17.335922 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:27:17.338596 systemd[1]: sshd@15-172.234.26.141:22-139.178.68.195:52724.service: Deactivated successfully.
Nov 1 00:27:17.341267 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:27:17.346838 systemd-logind[1448]: Removed session 16.
Nov 1 00:27:17.435269 systemd[1]: Started sshd@16-172.234.26.141:22-139.178.68.195:52740.service - OpenSSH per-connection server daemon (139.178.68.195:52740).
Nov 1 00:27:17.875054 sshd[5624]: Accepted publickey for core from 139.178.68.195 port 52740 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:27:17.876089 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:27:17.881089 systemd-logind[1448]: New session 17 of user core.
Nov 1 00:27:17.887211 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 00:27:18.282401 sshd[5624]: pam_unix(sshd:session): session closed for user core
Nov 1 00:27:18.289160 systemd[1]: sshd@16-172.234.26.141:22-139.178.68.195:52740.service: Deactivated successfully.
Nov 1 00:27:18.294550 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:27:18.298198 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:27:18.300630 systemd-logind[1448]: Removed session 17.
Nov 1 00:27:19.921412 kubelet[2546]: E1101 00:27:19.921299 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Nov 1 00:27:20.919196 kubelet[2546]: E1101 00:27:20.919102 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8"
Nov 1 00:27:21.922580 containerd[1464]: time="2025-11-01T00:27:21.922083202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:27:22.067216 containerd[1464]: time="2025-11-01T00:27:22.067128132Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:27:22.068552 containerd[1464]: time="2025-11-01T00:27:22.068417751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:27:22.068552 containerd[1464]: time="2025-11-01T00:27:22.068509511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:27:22.068708 kubelet[2546]: E1101 00:27:22.068659 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:27:22.068708 kubelet[2546]: E1101 00:27:22.068699 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:27:22.069117 kubelet[2546]: E1101 00:27:22.068776 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-4s2pm_calico-apiserver(6f8b1313-5d3a-421c-a1c3-861bc7b1da27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:27:22.069117 kubelet[2546]: E1101 00:27:22.068817 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-4s2pm" podUID="6f8b1313-5d3a-421c-a1c3-861bc7b1da27"
Nov 1 00:27:22.921941 containerd[1464]: time="2025-11-01T00:27:22.921903617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 1 00:27:23.053718 containerd[1464]: time="2025-11-01T00:27:23.053663674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:27:23.054899 containerd[1464]: time="2025-11-01T00:27:23.054766724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 1 00:27:23.054899 containerd[1464]: time="2025-11-01T00:27:23.054849284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 1 00:27:23.055091 kubelet[2546]: E1101 00:27:23.054977 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:27:23.055091 kubelet[2546]: E1101 00:27:23.055049 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:27:23.055211 kubelet[2546]: E1101 00:27:23.055120 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:27:23.057380 containerd[1464]: time="2025-11-01T00:27:23.056791743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 1 00:27:23.197720 containerd[1464]: time="2025-11-01T00:27:23.197268589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:27:23.198498 containerd[1464]: time="2025-11-01T00:27:23.198393839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 1 00:27:23.198498 containerd[1464]: time="2025-11-01T00:27:23.198464009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 1 00:27:23.198625 kubelet[2546]: E1101 00:27:23.198587 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:27:23.199326 kubelet[2546]: E1101 00:27:23.198633 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:27:23.199326 kubelet[2546]: E1101 00:27:23.198697 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8hdgb_calico-system(12cca151-8712-4604-9035-7f2e07caab0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:27:23.199326 kubelet[2546]: E1101 00:27:23.198741 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8hdgb" podUID="12cca151-8712-4604-9035-7f2e07caab0c"
Nov 1 00:27:23.340465 systemd[1]: Started sshd@17-172.234.26.141:22-139.178.68.195:57260.service - OpenSSH per-connection server daemon (139.178.68.195:57260).
Nov 1 00:27:23.668465 sshd[5655]: Accepted publickey for core from 139.178.68.195 port 57260 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:27:23.670339 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:27:23.675574 systemd-logind[1448]: New session 18 of user core.
Nov 1 00:27:23.679415 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 00:27:24.029007 sshd[5655]: pam_unix(sshd:session): session closed for user core
Nov 1 00:27:24.035068 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:27:24.037602 systemd[1]: sshd@17-172.234.26.141:22-139.178.68.195:57260.service: Deactivated successfully.
Nov 1 00:27:24.041731 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:27:24.044649 systemd-logind[1448]: Removed session 18.
Nov 1 00:27:24.919252 containerd[1464]: time="2025-11-01T00:27:24.919192332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:27:25.047691 containerd[1464]: time="2025-11-01T00:27:25.047644366Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:27:25.049270 containerd[1464]: time="2025-11-01T00:27:25.049190445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:27:25.049693 containerd[1464]: time="2025-11-01T00:27:25.049304365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:27:25.049745 kubelet[2546]: E1101 00:27:25.049489 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:27:25.049745 kubelet[2546]: E1101 00:27:25.049530 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:27:25.049745 kubelet[2546]: E1101 00:27:25.049615 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57df9d5c69-r82hw_calico-apiserver(7decf862-2dea-422d-a655-b341baeeaa59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:27:25.049745 kubelet[2546]: E1101 00:27:25.049652 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57df9d5c69-r82hw" podUID="7decf862-2dea-422d-a655-b341baeeaa59"
Nov 1 00:27:26.312549 systemd[1]: run-containerd-runc-k8s.io-6592e6c6e12e26eb7ce02ed059b1512b75c37844a8d86b9fb9550abbfd284e14-runc.cbzwrM.mount: Deactivated successfully.
Nov 1 00:27:28.920662 kubelet[2546]: E1101 00:27:28.920590 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t79g9" podUID="a394b5b4-84f6-43c3-bf21-09838f083553"
Nov 1 00:27:28.922705 containerd[1464]: time="2025-11-01T00:27:28.922659208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 1 00:27:28.927621 kubelet[2546]: E1101 00:27:28.927576 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79458bd765-tc96j" podUID="1449e27d-cfd3-4b57-8ca8-d99ff2c00988"
Nov 1 00:27:29.095684 systemd[1]: Started sshd@18-172.234.26.141:22-139.178.68.195:57270.service - OpenSSH per-connection server daemon (139.178.68.195:57270).
Nov 1 00:27:29.110612 containerd[1464]: time="2025-11-01T00:27:29.110301817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:27:29.111511 containerd[1464]: time="2025-11-01T00:27:29.111472207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 1 00:27:29.111600 containerd[1464]: time="2025-11-01T00:27:29.111556047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 1 00:27:29.115349 kubelet[2546]: E1101 00:27:29.115311 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:27:29.115430 kubelet[2546]: E1101 00:27:29.115386 2546 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:27:29.115501 kubelet[2546]: E1101 00:27:29.115463 2546 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f5879cb96-dlw8c_calico-system(c30f1fcc-7cd9-400f-884f-bd1e3091973a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:27:29.115645 kubelet[2546]: E1101 00:27:29.115506 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5879cb96-dlw8c" podUID="c30f1fcc-7cd9-400f-884f-bd1e3091973a"
Nov 1 00:27:29.422156 sshd[5689]: Accepted publickey for core from 139.178.68.195 port 57270 ssh2: RSA SHA256:XWUNFj89XSMPQGmWRbFBLTbRHb1wzE2BeQgMOvH+PMw
Nov 1 00:27:29.424511 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:27:29.429958 systemd-logind[1448]: New session 19 of user core.
Nov 1 00:27:29.436153 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 00:27:29.727985 sshd[5689]: pam_unix(sshd:session): session closed for user core
Nov 1 00:27:29.733110 systemd[1]: sshd@18-172.234.26.141:22-139.178.68.195:57270.service: Deactivated successfully.
Nov 1 00:27:29.735559 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:27:29.737301 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:27:29.738873 systemd-logind[1448]: Removed session 19.
Nov 1 00:27:31.921743 kubelet[2546]: E1101 00:27:31.921317 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd7bd9949-qt64t" podUID="009577cc-d930-45a6-aee8-0f7207b1b9a8"